Can He Make The Dancing Hexapod Robot Happy?
The news at the University of Arizona's Engineering Online website is that Matt Bunting's now-famous spider-like "hexapod" robot is hitting the road over the next several months. The YouTube video of his hexapod has gone viral to the accompaniment of 12 Cent Dwarf's flamenco guitar music:
12 Cent Dwarf turns out to be the young electrical engineering student himself — and his hexapod appears to “dance” perfectly synchronized to one of Matt’s own music compositions.
When not composing music or riding one of his unicycles, Matt has been building robots for as long as he can remember. The hexapod was built for a Spring 2009 cognitive robotics class. After completing the project (he got an A) using a "Frankenstein" of spare parts, he was immediately offered a job in a robotics lab by U. of A. professor Anthony Lewis. His use of an Intel Atom processor caught the attention of product manager Stewart Christie at Intel. Christie offered Bunting the latest and greatest Intel processors to build two more hexapods. Intel will use one of them to tour the country showing off potential applications of its devices.
h+: Your “dancing hexapod” video is getting close to 100,000 hits now on YouTube. Did you have any idea your little hexapod would become such a star when you built it?
MB: I really had no idea that it would become so popular, but naturally it’s something that I think everyone hopes for when they post a YouTube video. There are other incredible hexapods out there which inspired me that have also gone viral (the A-pod for example).
h+: I read somewhere that you started out building robots with the Lego Mindstorms robot kit. Did you take a class in elementary school? What was your favorite Lego project?
MB: I actually started building robots before Mindstorms came out, and needless to say I was very excited to hear about Lego Mindstorms when it arrived. I never took a class where we were able to play with Legos, but my parents were always very supportive of my interests. Hmm… my favorite Lego project is a tough question. I wish now that I had taken pictures of all my builds so I could remember them more easily. One that I do recall is a tracked vehicle with a mounted camera that fed back live video, which I would drive under the beds in the house, looking for my cat.
h+: Your Facebook wall said something about sending a bot into the girls' locker room at school…
MB: (Laughs) Yes, I think that was when I was 13. I built a robot using standard servo motors, modified to rotate continuously. I attached a wireless video camera to the little wheeled rover, which fed back to a small battery-powered TV that I mounted on the controller with hot glue. The rover could fit in the palm of my hand, so I brought it to school quite a bit. My friends (including girls) then persuaded me to drive it into the girls' locker room, so I drove it in briefly and drove it back out. No one was in the locker room at the time, though.
h+: You’re working with Stewart Christie at Intel using the latest Intel Atom processor, right?
MB: I'm still in my senior year at the University of Arizona, but I am in contact with Stewart to get him some hexapods with the Atom processor inside. I do not work for Intel, though. I chose the Atom processor because I wanted to run image processing algorithms onboard. Before, I had done most of the processing off-board on my laptop, so having everything on the hexapod makes things much easier to implement and more efficient. Lately I have been writing multi-threaded code, which is completely blowing away everything I have done previously. I am very happy with the Atom processor — it's absolutely perfect for mobile robotics.
h+: The learning and adaptive algorithms that you developed for the hexapod enable it to evolve a new navigation strategy every time you power it up, correct? Did you develop the algorithms on your own?
MB: The learning mechanism is a standard method of Q-learning [Editor’s note: this is a means of learning through experience without a teacher – the bot computes possible actions relative to a goal state, and then selects one], so the hexapod can learn how to operate the motors to move around in the environment.
At the start of the program, there is no knowledge of how to move the motors elegantly to walk around — that is, no use of inverse kinematics [Editor's note: a type of motion planning that determines what range of motion is possible]. The robot then experiments by changing the state of the motors and observes the reward sensor across the transition between two states. The reward sensor was a Logitech Communicate Deluxe webcam: software measures the optic flow between each transition, and from the resulting vector field the hexapod can tell how the state transition moved its body.
One thing to note is that the YouTube video is not the hexapod learning, but actually an implementation of inverse kinematics. The video is just a demonstration of the platform used to learn to walk. The algorithms are common in the field of machine learning, but were uniquely applied here, using vision as the reward sensor on the hexapod platform.
h+: Does the hexapod develop a map of its surroundings so that it can find its way back to its starting point? Have you tested it in different terrains such as sand, rock, and so forth?
MB: During an Introduction to Robotics course, I found I was interested in the link between vision and legged locomotion. There is a plethora of information that can be extracted from a simple camera, so I really enjoy exploring these areas. In the project for this class, I used optic flow information to build a 3D reconstruction of the terrain, which was then used as a map for terrain adaptation. The project was successful, and the robot was able to step on top of and walk over a couple of books on the ground. A common earlier approach adds a touch sensor to the foot of each leg: the robot lowers each foot until it touches the ground, which is a very slow method. Vision gives a good estimate of where the next obstacle is, so foot placement can be planned accordingly.
h+: I understand that you’d like to program some basic emotions into the hexapod. Are you using existing algorithms for this? How do you see emotions benefiting the hexapod’s performance?
MB: The application of emotions is a tough one for me. It is very easy to add a floating point variable called "happiness," but the main issues are how to modify the emotion and how to act on it. If the hexapod is happy, then how is its behavior different from when it is sad? What will cause the hexapod to be interested in something? I can easily have the hexapod tell if it sees a face, so emotions could be modified based on interaction with a human. I have not implemented it, but I certainly have thought about it. It would be really cool to have a hexapod that is essentially a pet in my house, and having emotions is essential in my mind.
Oh, and I forgot to mention that I’m allergic to cats and dogs. So clearly I need to build my synthetic pets to keep me company when I’m at home alone… (Chuckles)
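The "floating point variable called happiness" idea might look something like the sketch below. This is purely hypothetical — Bunting says he has not implemented it — but it shows one answer to his own questions: face detection nudges the emotion up, the emotion decays back toward neutral over time, and behavior (here, an assumed gait-speed parameter) differs with mood:

```python
from dataclasses import dataclass

# Hypothetical emotion-state sketch, illustrating the idea from the
# interview. All names and constants here are assumptions for illustration.
@dataclass
class EmotionState:
    happiness: float = 0.0   # ranges from -1.0 (sad) to 1.0 (happy)
    DECAY: float = 0.95      # emotions fade back toward neutral each tick

    def update(self, face_detected: bool) -> None:
        """Decay toward neutral; nudge happiness up on human interaction."""
        self.happiness *= self.DECAY
        if face_detected:
            self.happiness = min(1.0, self.happiness + 0.2)

    def gait_speed(self, base: float = 1.0) -> float:
        """One way behavior could differ: a happier hexapod dances faster."""
        return base * (1.0 + 0.5 * self.happiness)

e = EmotionState()
for _ in range(5):           # the robot sees a face for five ticks
    e.update(face_detected=True)
print(round(e.happiness, 3), round(e.gait_speed(), 3))
```

The design choice worth noting is the decay term: without it, a single interaction would leave the robot "happy" forever, which is exactly the modify-and-act-on-it problem Bunting identifies.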
h+: Hey, I’d buy one of your robot pets! I think it’s going to be a big up-and-coming industry.
MB: Definitely. The Sony Aibo was supposed to be a substitute for real animals in apartments in Japan, since many apartment buildings do not allow pets. The Japanese philosophy regarding the differences between animate and inanimate objects differs greatly from views in Western culture. I think robotic pets are likely to be much more successful in the East than here.
h+: Any plans to build humanoid robots?
MB: Stewart contacted me recently about an Atom-powered humanoid robot, and asked me specifically if I could make it dance. He may have plans to send me the humanoid, so I may explore that area. The lab I work at is actually researching bipedal locomotion, so I may end up delving into that area as well.
h+: Do you think humanoid robots can be programmed to make ethical decisions – particularly on the battlefield? Do you think a "Terminator" scenario can be avoided?
MB: (Laughs) Ah yes, the classic Terminator scenario… this actually ties directly into the difference between Japanese and Western philosophy. We tend to think that whenever a robot gets near the point of having a "soul," scary things will happen, like the robots in the Terminator movies. The Japanese view is that inanimate and animate objects are the same, so building robots that approach the point of having a soul would simply benefit humanity.
Programming ethics is a very interesting subject, especially when the robot needs to make judgments. I saw a video of an Asimo robot making a judgment on whether or not an object was a chair, with impressive results. Perfecting and applying this to robots so that they can make ethical decisions on the battlefield is incredibly controversial, for obvious reasons. There are many reasons why the current military robots are purely teleoperated. I would say that we are many, many years out from having an autonomous war robot.
h+: Were there any particular robots in film or fiction that inspired you when you were younger, or that continue to inspire you?
MB: It’s really hard to say what inspired me. Of course, most of the "coolest" robots were the ones in the scary movies like the Terminator series, and the biological-machine-fused robots in Virus. I grew up with “Johnny 5” from the movie Short Circuit and enjoyed the robots from the movie Batteries Not Included.