Slowly, Clunkily Racing Toward the Robot Revolution: Lessons from the DARPA Grand Challenge

I had mixed feelings about the DARPA Robotics Grand Challenge when I first heard about it.  On the one hand, it sounded really cool — making humanoid robots drive cars, climb ladders, use power tools and carry out search and rescue tasks!!!  Wow!  On the other hand, it seemed like a recipe for a lot of smart people to spend a lot of money overfitting complicated robots to very specific tasks.

My skepticism didn’t stop me from helping put together a proposal for the Challenge, though.  Together with David Hanson and some other colleagues, I co-wrote a proposal for a funky humanoid rescue robot (shown at right), which would have been controlled using OpenCog software (along with a human operator).

When our proposal was rejected for funding, I was disappointed yet also relieved.  We could have sought external funding to participate regardless of DARPA’s funding decision (in fact this was the route the ultimately winning team from KAIST took).  But part of me was happier NOT to have to spend a chunk of my life helping engineer a highly complex system for a particular set of test problems.  (Though for purely aesthetic reasons, it would have been awesome to see David Hanson’s elegant, smiling robot working alongside all the big clunky metallic-looking robo-beasts…)

I didn’t attend the Grand Challenge event in person (it would’ve been a long trip from Hong Kong), but I watched the videos on YouTube along with everyone else, and I saw the Atlas robot (made by Boston Dynamics especially for the Grand Challenge) in action later at Robert Hung’s lab at Hong Kong University.


Just for fun — me with OpenCogger Linas Vepstas, and Marek Rosa and Olga Afanasjeva from GoodAI, visiting Robert Hung’s Atlas robot lab at Hong Kong U after the Grand Challenge was all done…  Atlas in person is a big, noisy, impressive, scary beast!!

Seoul “Humanoids” Workshop on the DARPA Challenge

Anyway, on Tuesday Nov 3 2015 I attended a fascinating workshop at the Humanoids robotics conference in Seoul, titled “What did we do for the DARPA Robotics Challenge?”  The presenters were leaders of teams that did reasonably well in the recent humanoid-robot search-and-rescue competition.   It made sense for this to happen in Korea, since the winning Challenge team was from KAIST, a South Korean university.   It was very interesting to hear the various challenges the different teams had faced, and how they’d met them.

The workshop was accompanied by demos of the KAIST, Seoul National University and Robotis robots carrying out various Grand Challenge tasks.   I have to say the robots do qualitatively feel more impressive F2F than on YouTube.  Yeah, they’re big, clunky, slow and awkward; and they fall down sometimes.  But still they WORK, and they DO STUFF.  They go up and down stairs and use power tools, and drive cars and get out the door and walk or roll away.   Clearly a lot of work remains before robots roam the Earth freely and autonomously like living beings.  But anyone with any imagination at all can look at these robots and see where things are headed.

It’s the same feeling I had looking at the first PCs at the end of the 1970s.   Yes, the early PCs kind of sucked. But they did some cool things (you could code your own video games!!) — and it was very, very clear to me where things were going…

Some interesting general things that I learned at the workshop:

1) Many of the robot falls were due to human operator errors.  Clearly the state of the art in robot operator interfaces needs improvement…

2) Lots of software bugs popped up during the competition.  This isn’t surprising — software testing is hard, and much of this was research software being tweaked right up until the time of the competition.

3) The above two factors were basically why the robots fell down so much: largely operator errors and software bugs, not fundamental hardware or software design flaws, nor excessive difficulty of the tasks.

4) Most teams that participated had pretty good general-purpose hardware and/or software platforms.  But none of these platforms would have been quite enough to perform adequately on the Grand Challenge tasks, without a bunch of special-case tweaking.

5) Teams that did well all paid close attention to the specific tasks involved and in particular to the point allocation system.  Some achievements meriting points were easy to “game” via, say, adding extra gizmos onto one’s robot just for that achievement…   In the end this was not an evaluation of what a bunch of different robot architectures could do — it was a competition to tweak one’s robot hardware and software to carry out some very specific tasks.

6) The Atlas robots supplied by Boston Dynamics were simply not as good as some of the other robots various teams created and entered.  I’ve seen an Atlas going through its paces up close at HKU.   It’s an impressive machine, but the three robots I saw doing their stuff at the Humanoids conference were all palpably slicker in their movements.  These robots are simply more nimble and agile than the Atlas (though generally speaking all of these robots are big slow clunky metal beasts, in the grand scheme of things).  Also, as Apple and other commercial firms know well, there is an advantage to having the same team of brains design both the hardware and the accompanying software.

Atlas robot in action

7) The lack of integrated whole-body behavior is generally recognized as a problem.  The robots didn’t tend to make much use of their environment in moving around — e.g. using their hands to lean or push on things for support.  This is a specific consequence of the software approaches followed, rather than being mainly a hardware issue.  In many of the architectures, walking-planning was handled very separately from e.g. arm-movement-planning, the two being combined only at a very high level and not in a richly interactive way.   (My suggestion since the early 1990s has been to take a deep learning type approach to movement control as well as to perception.  I was pleased to see a paper on this theme at the Humanoids conference — though not yet applying the idea to whole-body control.  I suppose this is an inevitable next step, though, given the popularity of deep learning today generally and the well-recognized need for whole-body robot control.)
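To make that last point concrete, here is a purely illustrative toy sketch in Python — the function names, numbers and one-dimensional “balance” model are all made up for illustration, not any team’s actual code.  It contrasts a decoupled architecture, where the leg planner and arm planner each do their own thing and are merged only at the top level, with a whole-body planner that considers both together:

```python
# Toy illustration (hypothetical, not any DRC team's code): decoupled
# vs. whole-body planning for a reach task, in one dimension.

def decoupled_plan(target_hand_x, balance_limit=1.0):
    # Leg planner: keeps the center of mass (CoM) over the feet,
    # knowing nothing about what the arm is doing.
    com_shift = 0.0
    # Arm planner: reaches for the target, knowing nothing about balance.
    hand_x = target_hand_x
    # Merged only at the top level: if the combined pose would tip the
    # robot (arm extended, CoM not compensating), the plan is rejected.
    tipping_moment = hand_x - com_shift
    feasible = abs(tipping_moment) <= balance_limit
    return feasible, com_shift, hand_x

def whole_body_plan(target_hand_x, balance_limit=1.0, max_com_shift=0.5):
    # Joint planning: the legs lean the CoM toward the reach, cancelling
    # part of the tipping moment, so farther targets remain feasible.
    com_shift = min(max_com_shift, target_hand_x / 2)
    hand_x = target_hand_x
    tipping_moment = hand_x - com_shift
    feasible = abs(tipping_moment) <= balance_limit
    return feasible, com_shift, hand_x

# A reach of 1.2 units fails when legs and arm are planned separately,
# but succeeds when they are planned together.
print(decoupled_plan(1.2)[0])   # False
print(whole_body_plan(1.2)[0])  # True
```

The point of the sketch is just that when the leg planner knows nothing about the reach, the only recourse is to accept or reject the combined plan at a very high level; a whole-body planner can actively trade the two off — which is exactly the kind of rich interaction the Challenge robots mostly lacked.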

The Hubo Cousins

Jun-Ho Oh and Paul Oh, two cousins (the former based in Seoul at KAIST and the latter at UNLV in Las Vegas) who both led successful Grand Challenge teams, had interesting comments on the reasons for their teams’ success.

Jun-Ho Oh’s Hubo robot (Jun-Ho is shown to the right with a different version of the robot, not customized for the DARPA challenge … see pics below for his team’s Grand Challenge robot) clearly displayed a superior mechanical design.  Its ability to transform from a walking robot into a 4-wheeled robot gave it a clear advantage (Optimus Prime would be proud!).   When dismounting from a vehicle it seemed to make more graceful use of gravity than, say, the Atlas … rather than awkwardly determining every little movement using its motors in a calculated way.

Paul Oh (to the right, below) talked a lot about the trust between his team and Jun-Ho’s team, which allowed them to share results and ideas freely.  He also noted the importance of open-source software development in their team’s progress.  They had 6 Hubo robots in different locations, and this combined with an OSS methodology allowed different teams to work on different problems, each taking ownership over a certain aspect of the robot’s perception, control or behavior, but all working together in a coordinated way.

 

Hubo robot, in 4-wheeled mode

These points resonated with me rather well, obviously, since I’m a strong advocate of developing both robotics software (at Hanson Robotics) and AGI software (OpenCog) in an open-source manner.  And our intention at Hanson Robotics is to spread our robots far and wide, enabling and encouraging researchers, hackers and commercial developers to contribute their own code and ideas to the collective open-source robotics/AI codebase.

And Now What?

I’m glad the DARPA Grand Challenge happened.  It showed the world what today’s robots can do, if largely human-controlled and tweaked for specific tasks … and I think it made a lot more people feel the reality of modern progress in robotics.   The specific robots created for the Grand Challenge may or may not have a big future — certainly they’re not going to be commercial products, given how large and slow-moving they are.  But they have a lot of power as inspirations.

My best guess is that DARPA is not likely to run any more Robotics Challenges in the near future.  I would suppose they don’t want to be typecast as a robotics agency; their next contests will probably be for something different.   I would assume their hope is that, just as the DARPA self-driving vehicle challenges helped seed today’s self-driving car boom, the Robotics Grand Challenge will prod big companies to invest in creating practically functional humanoid robots.

As little as I like the militarism of the US government, I have to admit the US military has done a stellar job of seeding interesting technologies.   I’ve been passing around the following graphic

[Chart: key iPhone technologies and their origins in government/military funding]

showing how the key technologies enabling the iPhone were all developed with government funding, and mainly US military funding.  South Korea, the victor in the Grand Challenge, is on the whole another big reminder of the power of state-funded, state-guided technology development.  It was the government putting funding into manufacturing and electronics that lifted South Korea from the poverty it experienced in the 1960s, to the wealthy, advanced-nation status it enjoys today.  For more discussion on this general theme, see this article.

Governments are — relative to other current institutions, at any rate — relatively good at investing in blue-sky technologies and transitioning them from idea to early-stage reality.  Commercial companies, on the other hand, seem to be much better at taking already-prototyped ideas and turning them into actually useful products that people want and will pay for.   With luck the DARPA Grand Challenge has shifted humanoid robotics into a domain where companies can more usefully start refining and productizing — in parallel with the large amount of research that remains to be done to get better-functioning robots (research that will be done largely in academia or in the open source hacker community).

Given how fast tech is advancing these days, it likely won’t be terribly long before significantly greater capability than we see in the Challenge robots is available in smaller, faster, lighter-weight robots at much lower cost.  In fact David Hanson and I are working toward that now, together with some other colleagues in Hong Kong — but that would be a different article, one for another time….