Robot See, Robot Do
It’s not just the Japanese and Americans who are building humanoid robots. An ambitious coalition of researchers from German, French, and Italian universities and research labs recently met in Berlin to tackle the problem of how bots can autonomously learn and develop motor skills in open-ended environments. Dubbed AMARSi for Adaptive Modular Architecture for Rich Motor Skills, this EU-funded open source project is focused on one major goal: a qualitative jump in robotic motor skills that emulates the richness of the biological world. The researchers would ultimately like to see bots learn from doing as their co-workers do — rather than being told what to do.
Gross motor skills include sitting up, balancing, crawling, and walking. Fine motor skills — generally harder for humanoid robots — involve precise movements to achieve delicate tasks such as picking up and manipulating small objects, cutting, or threading a needle. In humans and other vertebrates, these activities are controlled by motor neurons in the central nervous system (CNS). AMARSi intends to apply dynamic neural networks, new robotics hardware designs, and complex software algorithms to enable bots to learn from movement data and dynamically rewire their circuits to process and store new knowledge.
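To make "learning from movement data" concrete, here is a minimal sketch — not AMARSi's actual code, and all parameters are invented — of one common flavor of dynamic neural network: a fixed random recurrent network whose linear readout adapts online as movement samples stream in, learning to predict the next joint angle from the current one.

```python
import numpy as np

# Minimal sketch (not AMARSi code): a fixed random recurrent network whose
# readout weights adapt online to predict the next joint angle from the
# current movement sample -- one simple way a network "learns from movement".

rng = np.random.default_rng(0)
N = 50                                         # recurrent neurons
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))    # fixed recurrent weights
w_in = rng.normal(0, 1.0, N)                   # fixed input weights
w_out = np.zeros(N)                            # trainable linear readout
x = np.zeros(N)                                # internal network state
lr = 0.1                                       # learning rate

# Toy "movement data": a rhythmic joint trajectory, e.g. one gait phase
t = np.linspace(0, 8 * np.pi, 800)
traj = np.sin(t)

errors = []
for u, target in zip(traj[:-1], traj[1:]):
    x = np.tanh(W @ x + w_in * u)              # update recurrent dynamics
    y = w_out @ x                              # predict the next joint angle
    e = target - y
    w_out += lr * e * x / (x @ x + 1e-6)       # normalized online update
    errors.append(e * e)

print(f"mean sq. error, first 100 steps: {np.mean(errors[:100]):.4f}")
print(f"mean sq. error, last 100 steps:  {np.mean(errors[-100:]):.4f}")
```

The point of the sketch is the training regime, not the architecture: the network is never shown an explicit model of the movement, it simply adapts its readout as the data arrives.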
Willow Garage is also researching robotic sensing, communications, and motor control. Its PR2 robot illustrates some of the challenges of robotic motor control with a human-controlled gripper in this video:
The AMARSi approach to a problem like gripping will rely on a biologically inspired view of motor skills that goes beyond traditional robotic designs, says project coordinator Jochen Steil in Wired. AMARSi researchers hope their architecture will enable robots to learn by interaction, which involves a combination of kinesthetic learning, imitation, and exploration. To develop advanced, autonomous robotic systems, researchers need to both reverse- and forward-engineer biological systems. The ultimate aim is that a co-worker might first show the bot what to do (just as they would with a human trainee) and then signal a reward. The co-worker might also initially help the bot to perform a task on an assembly line or in another workplace environment and keep it from falling over or making some other mistake.
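The "show, then reward" loop can be sketched in a few lines. This is a hypothetical illustration, not AMARSi's algorithm: the robot starts by imitating a demonstrated trajectory, explores small variations, and folds the higher-rewarded variations back into its policy (reward-weighted averaging, a standard imitation-plus-reinforcement scheme). The trajectories and reward function below are made up.

```python
import numpy as np

# Hypothetical sketch of "show, then reward" (not AMARSi's method):
# imitate a demonstration, perturb it with exploration noise, and average
# the variations back into the policy in proportion to their reward.

rng = np.random.default_rng(1)
demo = np.sin(np.linspace(0, np.pi, 20))       # co-worker's demonstration
goal = demo + 0.2 * np.cos(np.linspace(0, np.pi, 20))  # true optimum differs slightly

policy = demo.copy()                           # start by pure imitation

def reward(traj):
    # The co-worker's reward signal: higher when closer to the desired outcome
    return -np.sum((traj - goal) ** 2)

for _ in range(200):
    rollouts = [policy + rng.normal(0, 0.05, policy.shape) for _ in range(10)]
    rs = np.array([reward(r) for r in rollouts])
    w = np.exp((rs - rs.max()) / 0.1)          # soft preference for better rollouts
    w /= w.sum()
    policy = sum(wi * r for wi, r in zip(w, rollouts))

print(f"reward of pure imitation: {reward(demo):.3f}")
print(f"reward after exploration: {reward(policy):.3f}")
```

The demonstration gets the robot close; the reward signal lets exploration close the remaining gap — which mirrors the division of labor AMARSi envisions between showing and rewarding.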
The initial research will be done using the toddler-like iCub bot, developed by the RobotCub Consortium. The iCub will be shown how to crawl and climb across obstacles to reach a doll placed on a sofa. This may require an occasional assist from a human — just as you might lovingly grab your own 1-year-old when she is about to fall over. The hope is that the AMARSi architecture will enable the iCub to learn from its experience and to improve its crawling and climbing skills independently of humans. Here’s a video showing a prototype iCub robot:
Project coordinator Steil comments, “Crawling looks simple, but it is not. It involves many simultaneous tasks, like balancing, reaching out to targets, and meeting goals. This is exactly what robots cannot do right now. Robots can only do one thing at a time, such as head for a target, or make an arm movement to reach something. Humans integrate all these complex skills. Robots can’t.”
You’ve undoubtedly seen the Boston Dynamics quadruped BigDog robot. The Biorobotics Lab at EPFL in Switzerland has developed a similar quadruped robot platform named “Cheetah” — really a small tabby house cat when compared with the BigDog’s Great Dane stature — that will also be used in the AMARSi project. Researchers plan to use Cheetah as a test bed and toolkit for control algorithms and locomotion principles, grounded in kinematic measurements on real hardware. The idea here is that the Cheetah bot will be able to play an open-ended ball game with a human — something quite simple for a Little Leaguer, but quite a complex task for a bot attempting to track the location of a ball in 3D space.
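One sub-problem in that ball game — estimating where the ball is and where it is heading from noisy observations — can be illustrated with an alpha-beta filter, a simplified Kalman filter. This is a toy example with made-up numbers, not anything from the AMARSi or EPFL codebase; the "ball" here moves at constant velocity and the "camera" adds Gaussian noise.

```python
import numpy as np

# Toy illustration (not AMARSi/EPFL code): tracking a ball's 3D position
# and velocity from noisy measurements with an alpha-beta filter, a
# simplified Kalman filter for constant-velocity targets.

dt, alpha, beta = 0.05, 0.5, 0.3
pos_est = np.zeros(3)                      # estimated position (x, y, z)
vel_est = np.zeros(3)                      # estimated velocity

rng = np.random.default_rng(2)
true_pos = np.array([0.0, 0.0, 1.0])       # ball starts 1 m off the ground
true_vel = np.array([2.0, 0.5, 0.3])       # metres per second

for step in range(40):
    true_pos = true_pos + true_vel * dt
    meas = true_pos + rng.normal(0, 0.02, 3)    # noisy "camera" measurement

    pred = pos_est + vel_est * dt          # predict one step ahead
    residual = meas - pred                 # how far off the prediction was
    pos_est = pred + alpha * residual      # correct the position estimate
    vel_est = vel_est + (beta / dt) * residual  # correct the velocity estimate

print("true position:     ", np.round(true_pos, 2))
print("estimated position:", np.round(pos_est, 2))
```

Even this idealized version hints at why the full task is hard for a robot: the real game adds gravity, bounces, occlusions, and the need to move the body toward the predicted intercept point, all at once.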
With a €7 million investment and a coalition of 10 partners, it appears the EU is not standing still in the race to develop viable robotic learning systems. The next AMARSi event is a workshop scheduled for September 2010. It remains to be seen whether the AMARSi approach of “do as I do, not as I say” will ultimately be effective in teaching gross and fine motor skills to robots. It does seem probable, however, that the human-like robots of the future will need to learn not just from us humans, but from each other as well.