H+ Magazine

Editor's Blog

Ben Goertzel
August 11, 2011


Readers with an artistic bent – please see the end of this article for a contest that may interest you!

Robotics technology is advancing wonderfully and rapidly — but is it advancing in the right direction? Contemporary industrial robots are great for certain narrowly specialized applications, but lack the flexibility needed (in multiple senses) to perform many of the tasks that come easily to humans or animals. Nearly all robot research takes place in robot labs, in carefully constructed and isolated environments intended to work around the limitations of current robot technology.

From an AGI point of view, the limitations of current robots are particularly severe. Robot vision is fairly advanced, but all commercially available robots currently lack a decent sense of touch. The robots' rigid "skin" doesn't feel anything! And the robots can't feel the insides of their bodies either — no kinesthesia! Human eye-hand coordination combines vision, muscle movement and kinesthesia in exquisite ways; human walking relies intimately upon both kinesthesia and touch (as the recent rage for "barefoot shoes" highlights).

Nearly every part of the human body is both a sensor and an actuator, in some sense. But today's robots are mostly inert, inactive and nonresponsive material, with a few sensors and actuators tacked on here and there. The human body is a massively parallel, intelligent, self-organizing system whose intelligence gloriously synergizes with that of the human brain. Current robot bodies are, well, "robotic" in the secondary sense that word has come to hold: stiff, graceless and non-reactive. They move in a manner that betrays the fact that the various parts of their bodies aren't really aware of how they're moving, and hence can't adapt their movements fluidly and contextually.

We generally take for granted that robots must be this way, at least for the near future — but is this really true? Sometimes a field adopts a technology direction due to reasons of chance or short-term convenience, and then gets locked into this direction when other possibilities would have been equally feasible, and perhaps preferable in a long-term sense. Perhaps there are other, radically different paths for robotics incorporating more aspects of biological bodies that would serve some critical purposes (such as artificial general intelligence) better in the coming years and decades.

The only ways to create robot bodies truly closely analogous to human bodies would be to emulate molecular biology, or else to develop some other sort of nanotechnology with comparable complexity. These are viable and laudable research approaches, but require a lot of preliminary steps of uncertain difficulty and duration before they can yield practical results for robotics. However, there may be alternatives that lie, in some senses, halfway between molecular biology-based robots and current inflexible “lifeless” robot bodies. One class of such alternatives, which I’ve been thinking about for a while, is “inorganic macrocell bots” or Imbots.

Imbots, as I intend the term, would be robots made of multifunctional, flexible “balls” that may be very loosely thought of as large-scale “cells” composing the robot. The macrocells could be various sizes, but what I’ve thought most about is the case of macrocells with radii between, say, a quarter centimeter and a centimeter.

Perhaps the most critical sort of macrocell would be what I think of as a “tactile / muscle ball” (“TM cell” for short). A simple form of TM cell would have the following properties:

  • Spherical by default
  • Containing a programmable microprocessor
  • Able to rapidly deform into varyingly elongated ellipsoidal shapes, based on its programmed instructions
  • Able to connect to other balls (including TM cells), in a way that lets it exchange electricity and/or streams of bits with them
  • Able to sense pressure on any part of its external surface, and rapidly register this sensation in its processor’s memory

By connecting together TM cells, one could potentially make a robot able to move about in a quite flexible and responsive way. Note that every part of the robot (at least every macrocell, if not the macrocells' interiors) would be able to both act and feel. So the robot would have sensors all over the outside of its body, and inside its body as well. If a robot of this kind walked, for example, it would not do so by the sole activity of a small number of joints (like current robots), but rather by the coordinated stretching of a host of interconnected balls, which could adapt their expansion and contraction based on the pressure they sensed from various directions.
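To make this a little more concrete, here is a rough Python sketch of how a single TM cell's sense/communicate/act loop might look, and how a chain of such cells could produce a coordinated, pressure-adaptive wave of stretching. Everything here is illustrative and hypothetical: the class and method names are made up, and the physics of deformation is collapsed into a single "elongation" number per cell.

```python
import math
from dataclasses import dataclass, field

@dataclass
class TMCell:
    """One hypothetical tactile/muscle macrocell.

    elongation: 0.0 = default sphere, 1.0 = maximally stretched ellipsoid.
    pressure:   most recent scalar pressure sensed on the cell's surface.
    """
    cell_id: int
    elongation: float = 0.0
    pressure: float = 0.0
    inbox: list = field(default_factory=list)      # messages received from neighbors
    neighbors: list = field(default_factory=list)  # connected cells

    def sense(self, external_pressure: float) -> None:
        # Register the pressure reading in the cell's local memory.
        self.pressure = external_pressure

    def broadcast(self) -> None:
        # Share local state with connected cells (a crude stand-in for the
        # electricity/bit-stream link in the bullet list above).
        for nb in self.neighbors:
            nb.inbox.append((self.cell_id, self.elongation, self.pressure))

    def act(self, gait_phase: float, gain: float = 0.3) -> None:
        # A toy adaptive rule: follow a shared rhythmic "gait" signal,
        # back off in proportion to how hard the cell is being pressed,
        # and partly average with neighbors so the motion stays coordinated.
        target = 0.5 * (1.0 + math.sin(gait_phase + 0.4 * self.cell_id))
        target *= 1.0 / (1.0 + self.pressure)
        if self.inbox:
            neighbor_mean = sum(msg[1] for msg in self.inbox) / len(self.inbox)
            target = 0.7 * target + 0.3 * neighbor_mean
            self.inbox.clear()
        self.elongation += gain * (target - self.elongation)


# A one-dimensional chain of cells producing a peristaltic wave.
cells = [TMCell(i) for i in range(6)]
for a, b in zip(cells, cells[1:]):
    a.neighbors.append(b)
    b.neighbors.append(a)

for step in range(20):
    for i, c in enumerate(cells):
        # Pretend cell 2 is pressed against an obstacle.
        c.sense(external_pressure=2.0 if i == 2 else 0.0)
    for c in cells:
        c.broadcast()
    for c in cells:
        c.act(gait_phase=0.5 * step)

print([round(c.elongation, 2) for c in cells])
```

Real macrocells would of course need far richer communication and mechanics than this, but even a rule this crude illustrates how adaptation can be distributed across the whole body rather than centralized in a few joint controllers.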



TM cells could be augmented by various other sorts of specialist balls such as eye balls with cameras in them, ear balls, speaker balls, power balls containing batteries or connectors to external power sources, roller balls allowing wheeled movement, etc. Some of these other balls might be rigid in shape, but it would be preferable if they were also pressure-sensitive all around.
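As another illustrative sketch, again with entirely hypothetical names, the specialist balls and the TM cells might all share a minimal common interface, so that any ball can plug into the same power and bit-stream network and contribute its own kind of data:

```python
from abc import ABC, abstractmethod

class Macrocell(ABC):
    """Hypothetical common interface shared by TM cells and specialist balls:
    every ball plugs into the same power/bit-stream network."""

    def __init__(self):
        self.links = []  # connected macrocells

    def connect(self, other: "Macrocell") -> None:
        self.links.append(other)
        other.links.append(self)

    @abstractmethod
    def step(self) -> dict:
        """Produce this ball's contribution to the shared bit stream."""

class EyeCell(Macrocell):
    def step(self) -> dict:
        # Would return a camera frame; here just a placeholder payload.
        return {"type": "image", "data": None}

class PowerCell(Macrocell):
    def __init__(self, charge: float = 1.0):
        super().__init__()
        self.charge = charge

    def step(self) -> dict:
        return {"type": "power", "available": self.charge}

# Hypothetical usage: a camera ball and a battery ball joined into one network.
eye, battery = EyeCell(), PowerCell(charge=0.8)
eye.connect(battery)
print(eye.step(), battery.step())
```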

Certainly, the creation of TM cells and their kin would require substantial engineering effort. However, it’s not clear that this would exceed the effort required to incrementally improve conventional robotics in the direction of increased AGI-friendliness and/or biologicality. Conventional robotics is based on the engineering of increasingly high-precision components, which is certainly desirable for many industrial robotics applications. However, Imbots may be able to achieve impressive functionality with components displaying relatively imprecise behavior, due to the emergent processes arising when autonomously sensing/acting components are appropriately networked together.

Conventional robotics has often aimed toward a sort of “robotic precision” that biological organisms generally don’t have and that simpler incarnations of Imbots would likely also not possess. However, there are multiple evolved synergies, both obvious and subtle, between the imprecision and the intelligent complexity of biological bodies. If Imbots can emulate this synergy at an abstract level, without emulating the particulars of biological bodies, then they may be able to achieve human body-like behaviors more readily than conventional robots, in spite of their lesser component-wise precision. It’s not hard to envision hybrid robots as well, combining the flexibility of inorganic macrocells with the precision of conventional robotics.

For instance, we humans are very poor at rotating our joints through precisely specified angles. Try rotating your shoulder 30 degrees theta and 17 degrees phi; your elbow 130 degrees theta and 210 degrees phi; and your wrist 23 degrees theta and 177 degrees phi — how accurately can you do it? By contrast, some current robots are great at this, enabling them to be programmed to carry out certain manufacturing tasks consistently and effectively. However, humans can often achieve fairly precise movements without this sort of accuracy, via eye-body coordination and proprioception. This enables humans to carry out many manipulation tasks current robots can’t, in spite of the robots’ incredible precision at executing individual movements. A hybrid system might contain, for example, arms built for high precision according to a conventional robotics methodology, combined with hands and fingers built using macrocells (and designed to operate via proprioception and eye-hand coordination).
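Here is a toy numerical illustration of that difference, not any particular robot's control code: an imprecise "body" that executes commands only approximately reaches a target badly when given one precisely specified open-loop command, but quite well when it repeatedly observes the remaining error and corrects, in the spirit of eye-hand coordination and proprioception. The function names and noise model are made up for the sake of the example.

```python
import random

def noisy_move(position, command, noise=0.2):
    """Imprecise actuator: executes each movement command only approximately."""
    return [p + c * (1.0 + random.uniform(-noise, noise))
            for p, c in zip(position, command)]

def reach_open_loop(start, target):
    # One precisely specified command, executed by an imprecise body:
    # the error in the single big motion goes uncorrected.
    command = [t - s for s, t in zip(start, target)]
    return noisy_move(start, command)

def reach_closed_loop(start, target, steps=25, gain=0.4):
    # Eye-hand-coordination style: repeatedly observe the remaining error
    # and issue a small corrective command, so component imprecision
    # washes out instead of accumulating.
    pos = list(start)
    for _ in range(steps):
        error = [t - p for p, t in zip(pos, target)]
        pos = noisy_move(pos, [gain * e for e in error])
    return pos

random.seed(0)
target = [10.0, 5.0]
print("open loop:  ", reach_open_loop([0.0, 0.0], target))
print("closed loop:", reach_closed_loop([0.0, 0.0], target))
```

The point is not the specific numbers but the pattern: feedback between sensing and acting can substitute, to a surprising degree, for precision in the individual components.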

All this may sound quite speculative, but -- as my colleague Joel Pitt pointed out in a comment on a previous version of this article -- there is already concrete and exciting work in this general direction going on, under the name of Claytronics. Check out the cool videos on the Claytronics site at Carnegie Mellon. I don't know if Claytronics is the technology via which Imbots for AGI will ultimately be achieved, but it certainly seems promising. If nothing else, it's a proof of principle that this sort of direction is real science and engineering, not just wishful thinking.

My own professional interest in Imbots comes from the AGI direction, and I must admit that, as an AGI researcher, I have mixed feelings about spending my time thinking about robotics. I have no doubt that human intelligence is deeply tied up with human embodiment, and that current robots lack multiple sorts of cognitively relevant flexibility that human bodies possess. So I’m quite confident that embodying early-stage AGI systems in more flexible robot bodies would aid considerably in their cognitive advancement (although I’m not as extreme as some, who claim that human-like embodiment is a prerequisite for achieving human-level AGI; I think there may be many different paths). However, my general research approach is to focus on the more obviously cognitive aspects of AGI and leave robotics to the roboticists. But upon further reflection, I’ve become concerned that robotics, driven by the requirements of the industrial robotics industry, may be going in a direction that’s badly suboptimal from an AGI perspective. The notion of Imbots is an attempt to counteract this by suggesting a variety of robot that may work better in terms of giving early-stage AGI systems what they need.

An AGI system connected to an Imbot would likely experience its body as continuous with its mind, much as we humans do. It would likely relate to its physical actions and perceptions in much the same way as it relates to its mental actions and perceptions, because the greater flexibility of the Imbot body more closely resembles the flexibility of the internal mental universe (as opposed to the rigidity and sensory opacity of typical robot bodies). It would likely develop a sense of the relationship between itself and its surroundings more similar to the human sense of self and environment, because its macrocells would provide it with sensitive whole-body tactility. Its sense of empathy and emotional connection would more closely resemble that of humans, because it would be able to achieve pleasure from touching and being touched in a variety of ways and would be able to mirror the observed actions and perceptions of others more sensitively. Of course, exploiting an Imbot’s potential for AGI would require appropriate software for cognition, perception and action — but my feeling is that if reasonably able Imbots were developed, the appropriate software would emerge in synergy, derived from current proto-AGI software projects (such as the OpenCog project that I co-founded). Current robots have naturally developed in synergy with engineering control theory, and Imbots would naturally develop in synergy with more flexible, adaptive-learning approaches to machine intelligence.

At this point Imbots are just an idea, but part of my reason for writing this article is the hope of spurring R&D in an Imbot direction. Imbots would seem ideal for a DIY hardware approach, although efforts from large, well-funded corporate or government research labs would certainly also be valuable.

Finally, as I’m not a talented visual artist, I’ve decided to launch an Imbot contest. If you feel inspired to create a digital image of one or more Imbots, illustrating the ideas described in this article, please email it to me at ben-at-goertzel.org with “Imbot image” in the subject line. If your image is the one I judge to best evoke the Imbot idea, it will be included in a revision of this article (and likely in future publications on Imbots), and I will PayPal you $100 (or, if you prefer, make a $100 donation to Humanity+ in your name). Runner-up images may also be featured in the revised article.

Dr. Ben Goertzel is CEO of AI software company Novamente LLC and bioinformatics company Biomind LLC; co-leader of the open-source OpenCog Artificial General Intelligence software project; Vice Chairman of futurist nonprofit Humanity+; Chief Technology Officer of biopharma firm Genescient Corp.; Director of Engineering of digital media firm Vzillion Inc.; Advisor to the Singularity University and Singularity Institute; Research Professor in the Fujian Key Lab for Brain-Like Intelligent Systems at Xiamen University, China; and general Chair of the Artificial General Intelligence conference series.

13 Comments

    regarding your artwork request, are you looking for something with a rigid endoskeleton or something else, like a cephalopod?

      ahh...perhaps both...

    After you discussed this idea with me, I realised it is very similar to the field of "claytronics" - a field that's been around for a few years and which is a good google term for people who are interested in reading more or making it a reality.

    1. Have you considered making contact with Dean Kamen? He is making some strides towards "natural" movement with his development of prostheses for amputees...
    2. It seems that the Imbot may well have to be a rule-based 'animal'. Surely that idea was thrown out long ago?

      IMHO, logic would say to go both ways via prosthetics and robotics. The trial and error related to the increased level of kinesthesia for the human would be of benefit toward the hypothetical ImBot.

      Since the idea is to loosen the restraints related to precision programming, the pool of data/experience would be wider. The push toward AGI would offer, eventually, response/reaction on the ImBot as well as human/prosthetic device.

    Hey Ben, your old partner Ken S. from Webmind here. I think there are a number of people working on and who have conceived of similar objects made up of smaller components that through contraction, expansion and interaction via electromagnetic or physical constraints allow a coordinated effort resulting in controlled, balanced motion of the whole. Certainly, several popular movies have toyed with the idea. I think it a good one and hope it succeeds. I would be inclined to think more on this subject and try to come up with a practical design when I have a free moment.

    thanks Joel -- I added a link to Claytronics into the article, after reading your comment ;-) ...

    Neil: Kamen's work looks relevant too; and no, I haven't made contact with him. Honestly I'm swamped with AGI stuff -- what I want is for someone else to build the Imbot for me 8-D ... if Dean wants to do it, woo hoo !!!

    Hey Ben,

    I like the direction you are thinking. Macrocell robotics reminds me of Amorphous Computing, a research group I used to be part of.
    Some of the computing paradigms we explored were loosely molecular and cellular developmental biology-inspired, and your article reminded me in particular of Radhika's PhD thesis.

    Oh, and speaking of bouncy, flexible, emotive robots, have you checked out My Keepon, a robot developed for autism research?

    Robots are in essence inanimate tools to be used, regardless of how much they may be shaped to look like humans. The hope that something close to human can eventually come out of robotics development, though, highlights the truth of Psalm 139:14 regarding how God made humans: "I will praise thee; for I am fearfully and wonderfully made: marvellous are thy works; and that my soul knoweth right well." While we can't figure out how to make a human or reasonably close to one, God figured it out a long time ago. That said, the article and the posts above demonstrate that people are trying.

      No, it doesn't highlight that "truth" actually. I also find myself wondering how something that doesn't exist could figure out how to create anything. Very odd.

    Hi Ben

    Fantastic post!

    The research area that is trying to make something like this a reality is called Self-Reconfiguring Modular Robotics (SRCMR), and I am so fascinated by the potential it has. I have a blog and a podcast focusing on it (I include some links below).

    I think you have an interesting vision; in particular, the concept of units of the same shape at different scales will be important for the efficiency of the solutions created.

    If you think it appropriate, Ben, I would like to add another $100 to the prize you offered, and I would be happy to post any good images etc. on my blog!

    For anyone who is interested in more info, I can recommend the Workshop on SRCMR in conjunction with IROS in San Francisco in late September:
    http://bit.ly/jJmCwc

    More on Self-reconfiguring modular robotics on Wikipedia
    http://en.wikipedia.org/wiki/Self-reconfiguring_modular_robot
    My blog
    http://bit.ly/cBGck7
    My podcast
    http://bit.ly/eZwV8e

    Again, thanks for a great post!
    Per
    per@flexibilityenvelope.com

    I would also collaborate with the people from Festo. (http://www.youtube.com/user/FestoHQ)

    They have the resources to bring something like this to the masses.

    Very impressive concept though, taking object-oriented programming to a whole new level!

    Interesting. I'd like to see this from a professional engineer's perspective.
