
Disney Research has developed a reinforcement learning-based pipeline that relies on simulation to combine and balance the vision of an animator with robust robotic motions. For the animator, the pipeline essentially takes care of implementing the constraints of the physical world, letting the animator develop highly expressive motions while relying on the system to make those motions real—or get as close as is physically possible for the robot. Disney’s pipeline can train a robot on a new behavior on a single PC, running what amounts to years of training in just a few hours. According to Disney Research's Moritz Bächer, this has reduced the time that it takes for Disney to develop a new robotic character from years to just months.

Extreme Parkour with Legged Robots (extreme-parkour.github.io)
submitted 1 year ago (last edited 1 year ago) by asuagar@lemmy.world to c/robotics_and_ai@mander.xyz
 

... In this paper, we take a similar approach to developing robot parkour on a small, low-cost robot with imprecise actuation and a single front-facing depth camera for perception, which is low-frequency, jittery, and prone to artifacts. We show how a single neural net policy operating directly from a camera image, trained in simulation with large-scale RL, can overcome imprecise sensing and actuation to output highly precise control behavior end-to-end. We show our robot can perform a high jump on obstacles 2x its height, long jump across gaps 2x its length, do a handstand and run across tilted ramps, and generalize to novel obstacle courses with different physical properties.
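
As a rough illustration of the end-to-end setup the abstract describes (a single network mapping a front-facing depth image directly to control outputs), a minimal sketch is shown below. The architecture, input resolution, and 12-dimensional action space are assumptions for illustration, not the network used in the paper.

```python
# Minimal sketch of a depth-image-to-action policy, assuming PyTorch.
# Network shape, 64x64 input, and 12-DoF action space are illustrative guesses.
import torch
import torch.nn as nn


class DepthPolicy(nn.Module):
    def __init__(self, num_actions: int = 12):
        super().__init__()
        # Small CNN encoder for a single front-facing depth image.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # MLP head maps image features to joint targets.
        self.head = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        # depth: (batch, 1, H, W) depth frame
        return self.head(self.encoder(depth))


policy = DepthPolicy()
action = policy(torch.zeros(1, 1, 64, 64))  # dummy 64x64 depth frame
print(action.shape)  # torch.Size([1, 12])
```

In the paper's setting such a policy would be trained in simulation with large-scale RL rather than supervised learning; the sketch only shows the input-to-output mapping, not the training loop.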

 

A Dev Robot for exploring ROS 2 and robotics using the Raspberry Pi Pico and Raspberry Pi 4.
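
For context on what the Pi 4 side of such a dev robot might run, here is a hedged sketch of a minimal ROS 2 node using rclpy. The node name, topic, and message type are assumptions for illustration and are not taken from the linked repository; a base controller (for example on the Pico) would be expected to turn the published commands into motor outputs.

```python
# Minimal rclpy node sketch for a small ROS 2 dev robot.
# Topic name and message type are illustrative assumptions.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist


class SimpleDriver(Node):
    def __init__(self):
        super().__init__('simple_driver')
        # Publish velocity commands for a downstream base controller.
        self.pub = self.create_publisher(Twist, 'cmd_vel', 10)
        self.timer = self.create_timer(0.1, self.tick)  # 10 Hz

    def tick(self):
        msg = Twist()
        msg.linear.x = 0.1  # creep forward at 0.1 m/s
        self.pub.publish(msg)


def main():
    rclpy.init()
    node = SimpleDriver()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == '__main__':
    main()
```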