Google AI teaches robots to imitate behaviors of dogs


In a new blog post, Google described how its AI research is producing more agile robots. The company explained that researchers have developed an AI system that learns from the motions of animals to give robots better mobility. The researchers believe the work could take legged robots to the next level and help them operate in the real world.

The framework takes motion-capture data from an animal and uses reinforcement learning to train a control policy. Using different reference motions, the researchers taught a four-legged Unitree Laikago robot a range of skills, from fast walking to hops and turns.
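At a high level, this kind of motion-imitation setup rewards the policy for matching the reference animal's pose at each timestep. The sketch below is illustrative only; the function names, state variables, and weights are assumptions, not the exact reward used in the paper.

```python
import numpy as np

def imitation_reward(robot_joint_angles, ref_joint_angles,
                     robot_root_pos, ref_root_pos):
    """Reward the policy for tracking one frame of a reference motion.

    All names and weights here are illustrative, not the paper's values.
    """
    # Penalize deviation of the robot's joint angles from the reference pose.
    pose_error = np.sum((robot_joint_angles - ref_joint_angles) ** 2)
    pose_reward = np.exp(-2.0 * pose_error)

    # Penalize drift of the robot's root (body) position from the reference.
    root_error = np.sum((robot_root_pos - ref_root_pos) ** 2)
    root_reward = np.exp(-10.0 * root_error)

    # The learned control policy maximizes this combined reward via RL.
    return 0.7 * pose_reward + 0.3 * root_reward
```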

The researchers recorded motion data from real dogs performing these activities, then used roughly 200 million simulation samples to train a simulated robot to imitate the recorded motions.
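Conceptually, the training stage alternates between collecting simulated rollouts and updating the policy until a sample budget on that order is exhausted. The sketch below uses hypothetical `env` and `policy` objects and a placeholder update rule; it is not the authors' training code.

```python
def train_in_simulation(env, policy, total_samples=200_000_000, horizon=4096):
    """Rough sketch of RL training against a simulated robot.

    `env` and `policy` are hypothetical stand-ins for a physics simulator
    and a learnable controller; the real system used a specific RL algorithm
    and infrastructure not shown here.
    """
    samples_used = 0
    while samples_used < total_samples:
        # Collect a batch of experience by running the current policy
        # on the simulated robot as it tracks a reference motion.
        batch = []
        obs = env.reset()
        for _ in range(horizon):
            action = policy.act(obs)
            next_obs, reward, done, _ = env.step(action)
            batch.append((obs, action, reward, next_obs, done))
            obs = env.reset() if done else next_obs
        samples_used += len(batch)

        # Update the policy from the collected batch (e.g. with a
        # policy-gradient method).
        policy.update(batch)
    return policy
```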

To make the policies work on a real robot in real time, the researchers randomized the dynamics of the simulation and used an encoder to map the randomized dynamics values into a representation the policy conditions on. When the policy was transferred to the real robot, the encoder was removed, and that set of variables was instead adapted directly on the hardware, allowing the robot to execute the learned skills.
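One way to picture this: during training, the randomized simulation parameters (mass, friction, latency, and so on) pass through an encoder that produces a latent vector for the policy; at deployment, the encoder is dropped and that latent vector is treated as free parameters to tune on the real robot. The sketch below is an assumption-laden illustration, not the paper's implementation, and every class and parameter range in it is hypothetical.

```python
import numpy as np

def sample_randomized_dynamics(rng):
    """Sample simulator parameters from broad ranges (illustrative values)."""
    return {
        "mass_scale": rng.uniform(0.8, 1.2),      # body-mass multiplier
        "friction": rng.uniform(0.3, 1.2),        # ground friction coefficient
        "motor_latency": rng.uniform(0.0, 0.04),  # seconds of actuation delay
    }

class LatentConditionedPolicy:
    """Hypothetical policy that conditions on a latent dynamics vector."""

    def __init__(self, encoder, controller):
        self.encoder = encoder        # maps dynamics params -> latent z (sim only)
        self.controller = controller  # maps (observation, z) -> action

    def act_in_sim(self, obs, dynamics_params):
        # In simulation, z is computed from the known randomized dynamics.
        z = self.encoder(dynamics_params)
        return self.controller(obs, z)

    def act_on_robot(self, obs, z):
        # On the real robot the encoder is removed; z is instead adapted
        # directly on the hardware so the skill transfers.
        return self.controller(obs, z)
```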

The real-world robot learned to imitate various motions captured from a dog, as well as a maneuver called a hop-turn. “We show that by leveraging reference motion data, a single learning-based approach is able to automatically synthesize controllers for a diverse repertoire [of] behaviors for legged robots,” the coauthors wrote in the paper. “By incorporating sample efficient domain adaptation techniques into the training process, our system is able to learn adaptive policies in the simulation that can then be quickly adapted for real-world deployment.”

However, the approach is not foolproof: the control policies could not learn more dynamic behaviors such as large jumps and runs.


