H+ Magazine
Covering technological, scientific, and cultural trends that are changing – and will change – human beings in fundamental ways.

Editor's Blog

Ben Goertzel
January 24, 2011

Computer scientist and futurist thinker J. Storrs Hall has been one of the leading lights of nanotech for some time now.  By developing concepts such as utility fog (see the figure below) and weather machines, he has expanded our understanding of what nanotech may enable.  Furthermore, together with nanotech icon Eric Drexler he pioneered the field of nano-CAD (nano-scale Computer Aided Design) during his service as founding Chief Scientist of Nanorex.

As Hall describes it, “Nanotechnology is based on the concept of tiny, self-replicating robots.  The Utility Fog is a very simple extension of the idea: suppose, instead of building the object you want atom by atom, the tiny robots linked their arms together to form a solid mass in the shape of the object you wanted?  Then, when you got tired of that avant-garde coffee table, the robots could simply shift around a little and you'd have an elegant Queen Anne piece instead.”

Multiple “foglets” join together in a purpose-specific fashion to form a chair, a tool, a computer, or whatever the design spec tells them.  One major role for artificial general intelligence systems in the future may be to work out detailed designs for nanosystems, including utility fogs.
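The reconfiguration idea described above can be sketched as a toy swarm simulation: each foglet is assigned a point in the new target shape and moves toward it.  This is purely an illustrative sketch – the 2-D points, the greedy nearest-target assignment, and all names here are my assumptions for the example, not Hall's actual foglet design.

```python
# Toy sketch of the utility-fog idea: a swarm of "foglets" holding one
# shape (a "coffee table") reconfigures into another (a "chair").
# Assumptions: 2-D points stand in for foglets, and a simple greedy
# nearest-target assignment stands in for real swarm coordination.

import math

def assign_targets(foglets, targets):
    """Greedily pair each foglet with the nearest unclaimed target point."""
    remaining = list(targets)
    assignment = []
    for pos in foglets:
        best = min(remaining, key=lambda t: math.dist(pos, t))
        remaining.remove(best)
        assignment.append(best)
    return assignment

def step(foglets, assignment, speed=0.5):
    """Move every foglet a fixed fraction of the way toward its target."""
    return [
        (fx + speed * (tx - fx), fy + speed * (ty - fy))
        for (fx, fy), (tx, ty) in zip(foglets, assignment)
    ]

def reconfigure(foglets, targets, steps=20):
    """Assign targets once, then iterate motion steps until convergence."""
    assignment = assign_targets(foglets, targets)
    for _ in range(steps):
        foglets = step(foglets, assignment)
    return foglets

# Two crude point clouds: "coffee table" -> "chair".
table = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
chair = [(0.0, 0.0), (0.0, 1.0), (0.0, 2.0), (1.0, 0.0)]
final = reconfigure(table, chair)
```

After 20 steps each foglet sits essentially on its assigned target, so the swarm now occupies the "chair" point cloud; a real design would of course also need collision avoidance, physical linking, and decentralized control.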

Hall – “Josh” to his friends – has also done a great deal to publicize nanotech concepts.  He founded the sci.nanotech Usenet newsgroup and moderated it for ten years; wrote the book Nanofuture: What's Next For Nanotechnology
and led the nanotech-focused Foresight Institute from 2009-2010.

I’ve known Josh mainly via his other big research interest, artificial general intelligence (AGI).  His recent book Beyond AI: Creating the Conscience of the Machine describes some of his ideas about how to create advanced AI, and also explores the history of the field and the ethical issues related to future scenarios where advanced AIs become pervasive and ultimately more intelligent than humans.

For this interview I decided to focus largely on nanotech – I was curious to pick Josh’s brain a bit regarding the current state of nanotech and the future outlook.  I found his views interesting and I think you will too!  And toward the end of the interview I couldn’t help diverging into AGI a bit as well – specifically into the potential intersection of AI and nanotech, now and in the future.

While at the present time Josh’s work on AI and nanotech are fairly separate, he foresees a major convergence during the next decades.  Once nanotechnology advances to the point where we have actual nanofactories, able to produce things such as flexibly configurable utility fogs, then we will need advanced AI systems to create detailed plans for the nanofactories, and to guide the utility fogs in their self-organizing rearrangements.

Ben Goertzel: Nanotechnology started out with some pretty grand visions (though, I think, realistic ones) – visions of general-purpose manufacturing and computation at the molecular scale, for example.  But on your website you carefully contrast “real nanotechnology, i.e. molecular machines (as opposed to films and powders re-branded ‘nanotech’ as a buzzword).”  How would you contrast the original vision of nanotech as outlined by Richard Feynman and Eric Drexler with the relatively flourishing field of nanotech as it exists today?

J. Storrs Hall: Feynman's vision of nanotech was top-down and mechanical.  With the somewhat arbitrary definition of nanotech as used these days, progress toward a Feynman-style nanotech won't be called "nanotechnology" until it actually gets there – so you have to look at progress in additive manufacturing, ultra-precision machining, and so forth.  That's actually moving fairly well these days.  (See http://www.scientific.net/AMR.69-70 for a sampling of recent technical work on ultra-precision machining, or Google “ultra-precision machining” for a list of companies offering services in this area.)

Drexler's original approach to nanotech was biologically based – most people don't realize that because they associate him with his descriptions of the end products, which are of course mechanical.  There's been some fairly spectacular progress in this area too, with DNA origami and synthetic biology and so forth.

I expect the two approaches to meet in the middle sometime in the 2020s.

BG: I see, and when the two approaches meet, then we will definitively have “real nanotechnology” in the sense you mean on your website.  Got it.  Though currently the biology approach and the ultra-precision machining approach seem quite different in their underlying particulars.  It will be interesting to see the extent to which these two technological approaches really do merge together – and I agree with you that it’s likely to happen in some form.

Next question is: in terms of the present day, what practical nanotech achievements so far impress and excite you the most?

JSH: Not much in terms of stuff you can buy – although virtually every chip in your computer is "nanotech" by the definition of the NNI (the National Nanotechnology Initiative), so if you go by that you'd have to include the entire electronics industry.

BG: OK, not much in terms of stuff you can buy – but what progress do you think has been made, in the last decade or so, toward the construction of “real nanotechnology” like molecular assemblers and utility fog?  What recent technology developments seem to have moved us closer to this capability?

JSH: Well, in the research labs you have some really exciting work going on in DNA origami, manipulation / patterning of graphene, and single-atom deposition and manipulation.  In the top-down direction – "Feynman’s path" – you have actuators with sub-angstrom resolution and some pretty amazing results with additive e-beam sintering.

BG: What about utility fog?  Are we any closer now to being able to create utility fog, than we were 10 years ago?  What recent technology developments seem to have moved us closer to this capability?

JSH: There have actually been some research projects in what's often called "swarm robotics" at places like CMU, although one of the key challenges is to design little robots simple and cheap enough that you can build piles of them without breaking the bank.  I think we're close to being able to build golf-ball-sized Foglets – meaning full-functioned ones – if anyone wants to double the national debt.  You'd have to call it "Utility Hail", I suppose.

BG: OK, I see.  So taking a Feynman-path perspective, you’d say that right now we’re close to having the capability to create utility hail – i.e. swarms of golf-ball-sized flying robots that interact in a coordinated way.  Nobody has built it yet, but that’s more a matter of cost and priorities than raw technological capability.  And then it’s a matter of incremental engineering improvements to make the hail-lets smaller and smaller until they become true foglets.

Whereas the Drexler-path approach to utility fog would be more to build upwards from molecular-scale biological interactions, somehow making more easily programmable molecules that would serve as foglets – but from that path, while there’s been a lot of interesting developments, there’s been less that is directly evocative of utility fog.  So far.

JSH: Right.  But things are developing fast and nobody can foresee the precise direction.

BG: Now let’s turn to some concrete nanotech work you played a part in.  You did some great work a few years ago with Nanorex, pioneering Computer Aided Design for nanotech.  What ultimately happened with that?  What's the current status of computer-aided design for nanotechnology?  What are the main challenges in making CAD for nanotech work really effectively?

JSH: The software we built at Nanorex – NanoEngineer-1 – is open source and is available online if anyone wants to play with it.

But I think it’s really too early for nano-CAD software to come into its prime, since the ability to design is still so far ahead of the ability to build.  So software like NanoEngineer-1 where you could design and simulate gadgets from Nanosystems has no serious user base.  Yet.  And I’d say the same is true of other nano-CAD software that has sprung up recently.

One exception is the software that allows you to design DNA origami and similar wet approaches.  But most of this is research software since the techniques are changing so fast.

There will definitely be a future for advanced nano-CAD software, but any major push will have to wait for the ability to build to catch up with the ability to design.

BG: Speaking of things that may be “too early,” I wonder if you have any thoughts on femtotechnology?  Is the nanoscale as small as we can go engineering-wise, or may it be possible to create yet smaller technology via putting together nuclear particles into novel forms of matter (that are stable in everyday situations, without requiring massive gravity or temperature)?

JSH: I don’t have any particular thoughts on femtotech to share.  The thing about nanotech and AI is that we have natural models – molecular biology and human intelligence – that show us that the goal is possible.  We don't have any such thing for femtotech.

BG: OK, fair enough.  Hugo de Garis seduced me into thinking about femtotech a bit lately, and I’ve come to agree with him that it may be viable – but you’re right that we have no good examples of it, and in fact a lot of the relevant physics isn’t firmly known yet.  It definitely makes sense for far more resources to be focused on nanotech, which we know is almost certainly possible, so it’s “just” a matter of engineering difficulties.  Which we seem to be making good progress on!

So, switching gears yet again… you've done a lot of work on AGI as well as on nanotech – as you know my main interaction with you has been in your role as AGI researcher.  How do these two threads of your work interact?  Or are they two separate interests?  Are there important common ideas and themes spanning your nanotech and AGI work?

JSH: There's probably some commonality at the level of a general theory of self-organizing, self-extending systems, but I'm not sure there's so much practical overlap in the near term of developing either one in the next decade.  Even in robotics the apparent overlap is illusory: the kind of sensory and cognition-driven robots that are likely to be helpful in working out AGI are quite distinct from the blind pick-and-place systems that will be the first several generations of nanotech automation, I'm afraid.

BG: And what's your view on the potential of AGI to help nanotech along?  Do you think AGI will be necessary in order to make advanced nanotech like utility fog or molecular assemblers?

JSH: Not to build them, but to use them to anywhere near their full potential.  With either utility fog or nanofactories (and also with biotech and ordinary software) you have access to a design space that totally dwarfs the ability of humans to use it.  It’s easy to envision a nanofactory garage: each time you open the door you find a new car optimized for the people riding, the trip you're taking, the current price of fuel, and the weather.  But who designs the new car each time?  You need an AI to do that, and one with a fairly high level of general intelligence.

BG: Yes, I agree of course.  But what kind of AGI or narrow AI do you think would be most useful for helping nanotech in this way – for designing plans to be used by nanofactories?  Do you think AI based closely on the human brain would be helpful, or will one require a different sort of AI specifically designed for the kinds of reasoning involved in nanotech design?  If we make an AI with sensors and actuators at the nano-scale, will its cognitive architecture need to be different than the human cognitive architecture (which is specialized somewhat for macro-level sensors and actuators)?  Or can the nano-design-focused AGI have basically the same cognitive architecture as a human mind?

JSH: I think the human brain, while clearly bearing the marks of its evolutionary origin, is a remarkably general architecture.  And the sensors in the human body are more like what nanotech could do than what current sensor technology can do.  You have a much higher bandwidth picture of the world than any current robot – and I think that's a key element of the development of the human mind.

BG: I definitely agree that the human brain gets a higher-bandwidth picture of the world than any current robot.  And yet, one can imagine future robots with nano-scale sensors that get a much higher bandwidth picture of the world than humans do.  I can see that many parts of the human brain architecture wouldn’t need to change to deal with nano-scale sensors and actuators – hierarchical perception still makes sense, as does the overall cognitive architecture of the human mind involving different kinds of memory and learning.  But still, I wonder if making a mind that deals with quantum-scale phenomena effectively might require some fundamental changes from how the human mind works.  I suppose this also depends on exactly how the nanofactories work.  Maybe one could create nanofactories that could be manipulated using largely classical-physics-ish reasoning, or else one could build others that would be best operated by a mind somehow specifically adapted to perception and action in the quantum world.

But as you said about applying nano-CAD, it’s somewhat hard to explore these issues in detail until we have the capability to build more stuff at the nano scale.  And fortunately that capability is coming along fairly rapidly!

So let’s talk a bit about the timing of future technology developments.  In 2001 you stated that you thought the first molecular assemblers would be built between 2010 and 2020.  Do you still hold to that prediction?

JSH: I'd say closer to 2020, but I wouldn't be surprised if by then there were something that could arguably be called an assembler (and is sure to be so called in the press!).  On the other hand, I wouldn't be too surprised if it took another five or ten years beyond that, pushing it closer to 2030.  We lost several years in the development of molecular nanotech due to political shenanigans in the early 20-aughts, and we're playing catch-up to any estimates from that era.

BG: In that same 2001 interview you also stated "I expect AI somewhere in the neighborhood of 2010," with the term AI referring to "truly cognizant, sentient machines."  It's 2011 and it seems we're not there yet.  What's your current estimate, and why do you think your prior prediction didn't eventuate?

JSH: I made that particular prediction in the context of the Turing Test and expectations for AI from the 50s and 70s.  Did you notice that one of the Loebner Prize chatbots actually fooled the judge into thinking it was the human in the 2010 contest?  We're really getting close to programs that, while nowhere near human-level general intelligence, are closing in on the level that Turing would have defended as "this machine can be said to think".  IMHO.  Besides chatbots, we have self-driving cars, humanoid walking robots, usable if not really good machine translation, some quite amazing machine learning and data mining technology, and literally thousands of narrow-AI applications.  Pretty much anyone from the 50s would have said, yes, you have artificial intelligence now.  In my book Beyond AI I argue that there will be at least a decade while AIs climb through the range of human intelligence.  My current best guess is that that decade will be the 20s – we'll have competent robot chauffeurs and janitors before 2020, but no robot Einsteins or Shakespeares until after 2030.

BG: Yes, I see.  Of course “AI” is well known to be a moving target.  As the cliché says, once something can be done, it’s miraculously not considered AI anymore.  We have a lot of amazing AI today already; most people don’t realize how pervasive it is across various industries, from finance to military to biomedicine.  I don’t really consider chatbots any indicator of progress toward artificial general intelligence, but I do think the totality of narrow-AI progress means something.  I guess we both agree that it’s this general progress in AI-related algorithms, together with advances in hardware and cognitive science, that’s pushing us toward human-level general intelligence.

Although the question of whether robot janitors will come before robot Einsteins is an interesting one.  As you’ll recall, at the AGI-09 conference we did a survey of participants on the timeline to human-level AGI – asking how long it will be till we have it, but also what kinds will come first.  A lot of participants roughly agreed with your time estimate, and thought we’d have human-level AGI within the next few decades.  But opinions were divided about whether the janitors or the Einsteins will come first – that is, about whether the path to human-level AGI will proceed via first achieving human-like robotic body control and then moving on to abstract cognition, or whether we’ll get advanced AGI cognition first, and humanlike robotics only afterwards.  It seems you’re firmly in the “robotics-first” camp, right?

JSH: Yes, and I explain why in my book.

I expect the progress toward true AGI to follow, to some extent, the evolutionary development of the brain, which was first and foremost a body controller whose components got copied and repurposed for general cognition.  In the 70s robotics was much harder than, say, calculus, because the computers then didn't have the horsepower to handle full sensory streams but math could be squeezed into a very clean – and tiny – representation and manipulated.  Nowadays we do have the horsepower to process sensory streams, and use internal representations of similar complexity.  A modern GPGPU can compare two pictures in the same time an IBM 7090 took to compare two S-expressions.  So robotics is getting easier almost by the day.

But the hard part of what smart humans do is finding those representations, not using them once they're programmed in.  Those AI programs weren't Newtons – they didn't invent calculus, they just used it in a very idiot-savant fashion.  Feynman put it this way, “But the real glory of science is that we can find a way of thinking such that the law is evident.”

We already have well-understood representations for the physical world, and we can just give these to our robots.  What's more, we have a good idea of what a good janitor should do, so we can quickly see where our robot is falling short, and easily evaluate the new techniques we invent to repair its deficiencies.  So I'd expect rapid progress there, and indeed that's what we see.  Once we have the new, working, techniques for things like recognizing coffeemakers, we'll be able to adapt them to things like recognizing promising customers, slacking employees, and so forth that form the meat of average human intelligence.

Perhaps it will be a while later before some robot muses, without being told or even asked, that the quality of mercy is not strained, but that it droppeth as the gentle rain from heaven, upon the place beneath...

BG: Heh.  Yes, I understand your perspective.  As you know I don’t quite agree 100%, even though I think robotics is one among several very promising paths toward human-level AGI.  But I don’t want to derail this interview into a debate on the degree of criticality of embodiment to AGI.

So let me sort of change the subject instead!  What are you mostly working on these days?  AGI? Robotics?  Nanotech?  Something else?

JSH: Back to AGI/robotics, after a detour into general futurism and running Foresight.  Same basic approach as I described in my book Beyond AI and my paper at AGI-09 (see also the online video of the talk), with an associative-memory-based learning variant of Society of Mind.

Oh, and on the side, I’m trying to write a science fiction novel that would illustrate some of my machine ethics theories.  And I should also mention that I have a couple of chapters in the new book Machine coming out from Cambridge later this year.

BG: Lots of great stuff indeed!  I’m particularly curious to see how the next steps of your AGI research work out.  As you say, while the nanotech field is advancing step by step along both the Feynman and Drexler type paths, once we do have that nanomanufacturing capability, we’re going to need some pretty advanced AIs to run the nano-CAD software and figure out exactly what to build.  Maybe I can interview you again next year and we can focus on the progress of your AGI work.

JSH: I’m looking forward to it!


    Early machine intelligences will control sensors and actuators well enough – but much of their influence on the world will be via human beings. We will be their robot bodies – using a Google/Android-like model – while the real robots are still too clunky.

    Glad to see an article on "morphological matter" by some well-known names, since I'm getting tired of being dismissed as a loon every time I've tried to discuss Utility Fog, Claytronics, or Wellstone. Try explaining to someone that morphological matter will eliminate the entire concept of "locking up material resources into static forms for all eternity" and too many people's brains seem to short out.

    The sourceforge project page for NanoEngineer is old. I would instead recommend using these:

    1) the git repository

    2) the mailing list for development discussion

    3) the wiki

    4) and if you want visual bling:

    - Bryan

    That can reshape into a car in order to suit the current price of gas?

    I know it's just one part of the article, but wouldn't a car's worth of mass of utility foglets use just a little bit more energy than a tankful of gas???

    Come to think of it, how would a utility foglet work in the following ways?

    -Energy source? - Burn petrol? Fuel cells? Batteries? How would they refuel?
    -Sensors? - How would any individual element know where it was relative to other objects?
    -Repair? - What's to stop such small machines getting damaged by the environment all the time?
    -Control? - How does something that small control itself and coordinate with billions of others?
    -Expense? - Is this something that would ever really be economically affordable, assuming all the other problems are answerable?

    Until people can answer these questions, this type of nanotechnology sounds like total fantasy to me.

    The Matrix has you.

    That example is utterly ridiculous, but is an attempt to put it in terms the average person will understand.

    Simply put, a UF car would have NO NEED to run on gasoline. As for your other questions, Josh has actually answered many of them in his more technical pieces on UF. I recommend googling it.

    The "I can't be bothered to actually do research" answers are:

    Energy Source: There's quite a range of possibilities, from Ultracap batteries to solar, to a daisy chain network to the grid.

    Sensors, repair, and control: Since they are part of a massively parallel network, with each unit being just a single element in the whole, they would be coordinated by an overall program. Any number of sensors that already exist could ensure positional accuracy, and units that are not functional would be shuffled out of the mass by functional units. Josh's designs are based on silicon and carbon, which would be far more robust than a typical airborne organism, and since the fog would control the environment within the mass, it would face little "environmental hazard."

    Expense: I rather strongly recommend looking up "additive manufacturing," AKA 3D printing. Josh is not discussing currently commonplace manufacturing methods. He is discussing the likely refined manufacturing units that will exist in 10 to 20 years, in which either molecular-scale or atomic-scale precision is possible. At present, a golf-ball-sized unit could be made, but the cost per unit would be prohibitive. Five years from now, that cost could be halved; in ten it could be negligible. Why? Because the manufacturing base is presently in transition between subtractive manufacturing and additive manufacturing. Once additive manufacturing has become standard, the overwhelming majority of manufactured goods will rapidly drop in price because of the elimination of waste, overproduction, and massive integration. A 3D printer could manufacture a product using minimal resources, using ultrastrong materials like graphene to create "space filling" grids that make products tougher than steel but 90% open space. It would be able to make a product in minutes, on demand, eliminating the need to manufacture millions of units in hopes that they will all sell. And it could embed electronics into nearly anything, enabling applications that were previously impossible, like vinyl siding for your home that doubles as a solar panel and can change color and patterns on demand.

    So basically, your argument is "we can't do it now, so it's a fantasy." But sadly, all it really says is that you failed to do your research.

