Josh Storrs Hall, Ph.D., is a scientist and futurist. He believes that humans will create advanced nanomachines and artificial intelligence (AI) within a few decades, and that those technologies will dramatically improve the human condition, expand the economy, and engender the development of other powerful technologies. He has written extensively about these subjects in various science articles and in two books: Nanofuture and Beyond AI. Dr. Hall has served as President of the Foresight Institute and as founding chief scientist of Nanorex, Inc., has consulted for NASA, and has worked at the Laboratory for Computer Science Research at Rutgers, where he also earned his Ph.D. in Computer Science.
Technology’s future role in solving global problems
H+: Your optimism about the future is a rare thing. Most people seem to be pessimistic about it because there are already a lot of negative trends we’re not doing enough to mitigate. A few of these trends are listed below. How do you think future technologies will allow us to avert disaster in each case?
None of these is anything even vaguely resembling a real disaster. It’s fashionable, especially in intellectual circles, to bemoan these and various other supposed problems. Now, if there were a fifty-mile asteroid heading for the Earth, or the Sun were about to go nova, that would be something to worry about. But this stuff is hype, hysteria, and whining. The reader is urged to read The Rational Optimist by Matt Ridley for some perspective, or closer to home for the Silicon Valley types who are probably reading this, Diamandis and Kotler’s Abundance: the Future is Better than You Think.
The only real tragedy we have to worry about, and it is one that is actually happening as we speak, is that billions of people will live unnecessarily nasty, brutish, and shortened lives because the technology that could have given them comfortable, productive, enlightened, and rewarding ones was suppressed by people who mistakenly thought they were saving the planet.
H+: Can you define the following things, and where necessary, explain how they are different from each other?
There’s been some question over whether a Foglet should be considered a nanomachine. The Foglet itself would be at the micro-, not nano-scale – exactly like a living cell. Its internals would contain many nanomachines, atomically precise and working at the nanoscale – again, exactly as does a living cell. I’m not sure either term captures its essence, or that of cells, properly.
H+: How would nanomachines be made? By other, self-replicating nanomachines, or by nanofactories?
That’s like asking whether cars would be made in Detroit or Japan. There will be a wide range of possibilities, of which those are only the first two off-the-top-of-the-head guesses.
As with any other technology, there will be safe ways to develop it and dangerous ways. Various people, companies, and nations will try most of them. My guess is that the technical advantages of prepared feedstocks over trying to live off the land will favor nanofactories over free-floating assemblers at least for a while.
H+: Critics of the whole nanomachine/nanomanufacturing concept often point out there’s a “chicken-and-egg” problem: The only thing that could build a nanomachine is another nanomachine (or nanofactory, itself a collection of nanomachines) that is capable of self-replication. Since we have no nanomachines at all, we’ll never be able to build any. At which one of your nanotechnology phases will we overcome this problem, and how will it be overcome?
The same chicken-and-egg problem is true of precision machine tools, and yet the industrial revolution saw them developed incrementally into the base of a drastically new and more capable technology. I’ve always favored the incremental pathways (such as Feynman’s series of waldoes) over the various attempts to leapfrog directly to atomic scale.
H+: What are the five phases of nanotechnology?
Here are the stages I outlined in Nanofuture:
The tinkertoys analogy was a reference to the way life works. Our (human) cells build complex molecular machines, but not from atoms — they need a supply of pre-constructed amino acid molecules. We can almost certainly design a set of molecular building blocks that can be made into machines that can assemble the building blocks — and which are much simpler than machines made from, and capable of making products with, individual atoms.
I’d say we’re actually at Stage II now, and that Stages III and IV don’t look sequential; we’ll get them both in pieces over the next decade.
H+: Since you wrote Nanofuture in 2005, what discoveries in the mainstream science literature can you point to as proof that nanotech is advancing as you predicted? Are these advances happening at Moore’s Law rates as you believed they would?
Googling these should give you a good idea: DNA origami, DNA sequencing, nanocomputers, and atomic layer epitaxy.
The first two are examples of using biological nanomachines — it’s quite surprising how sophisticated this has become. The others are direct atomic-scale manipulation and fabrication with macro-scale machines, and this has also come a very long way.
What remains missing is any direct assault on the problem of using nanomachines to build nanomachines, or even of micromachines to build micromachines. It isn’t clear whether we’re close enough for a focused effort there to achieve results yet.
The area that has been accelerating tremendously since Nanofuture that seems likely to lead to synthesizers etc. is 3D printing. Unlike laboratory atom-twiddling, every step toward nano capability in a fab machine is useful: higher precision, more and better materials, and so forth. At some point the top-down and bottom-up approaches will meet.
H+: You say that there is no scientific doubt nanomachines can be made, that they will be made for sure this century, and that advanced nanomachines could in fact be created in 10-20 years if sufficient national resources were committed. How do you respond to the following 2006 National Research Council statement about nanomachines?
“Based on its examination of current manufacturing processes, the committee concluded that molecular self-assembly is feasible for the manufacture of simple materials and devices. However, for the manufacture of more sophisticated materials and devices, including complex objects produced in large quantities, it is unlikely that simple self-assembly processes will yield the desired results. The reason is that the probability of an error occurring at some point in the process will increase with the complexity of the system and the number of parts that must interoperate…
Biological systems, ranging in complexity from ribosomes, to viruses, to bacteria, to complex eukaryotic organisms, have been characterized as nature’s perfect machinery. Demonstrations that biological systems can be engineered to operate outside a living cell and in alternate configurations suggest the possibility of a potential model for future manufacturing systems. However, it is difficult to reliably predict the attainable range of chemical reaction cycles, error rates, speed of operation, and thermodynamic efficiencies of such bottom-up manufacturing systems. Although theoretical thermodynamic efficiencies have been calculated for such systems, the committee did not learn of verifiable results of experimentation that would support reliable prediction of the feasibility of such systems for use in manufacturing. Experimentation leading to demonstrations supplying ground truth for abstract models is appropriate to better characterize the potential for use of bottom-up or molecular manufacturing systems that utilize processes more complex than self-assembly.”
Frankly, this is a pile of bland, content-free committee droppings that could just as easily have come from a random bullshit/buzzword generator program. No “examination of current manufacturing processes” is going to tell you much about what a mature molecular manufacturing technology could do.
H+: Yes, that passage was written by a committee. But it almost sounds like you echoed part of their sentiment in Nanofuture when you said nanomachines and nanosystems would need to be very rigidly structured to function properly, with all the molecular parts (shafts, grabber arms, cogs, wheels, assembly lines, etc.) precisely shaped and distanced from each other. You noted that nanomachines would be at a disadvantage compared to organic cells and bacteria when it came to self-repair and adapting to changing conditions, and that the Gray Goo/aerovore scenario was unlikely partly because nanomachines would be so easily damaged outside of carefully controlled lab settings. How would nanomachines get around the problem of cascading malfunctions?
It’s very important not to conflate two properties of machinery that seem to be confused here.
One is self-correction and robustness to failure of a properly constructed, working machine. We do that all the time in things like high-density memory, where even background radiation causes bits to flip at random, but the error correction circuitry makes the whole thing very reliable as seen from the outside.
The other phenomenon, and in many cases an opposite one, is whether you can randomly tweak the design of a machine and come up with one that still works. Flip a few random bits in the design file for an error-correcting memory: if they land in the description of the trench capacitors, for example, you get a chip where every cell fails, not just one.
For a very in-depth discussion of the first kind of robustness in nanomachinery, have a look at Drexler’s Nanosystems. The second kind is much less well understood, but we do know well enough how not to do it accidentally.
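The error-correcting-memory point can be made concrete with a minimal Hamming(7,4) sketch (a standard textbook code, offered here only as an illustration, not as anything from Nanosystems): any single flipped bit in a stored word is located by a parity syndrome and flipped back, so the memory looks reliable from the outside even though individual bits fail at random.

```python
# Minimal Hamming(7,4) code: any single flipped bit in a stored word
# is detected and corrected, so the memory appears reliable from
# outside even though individual bits fail at random.

def encode(d):
    """Four data bits -> seven-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(c):
    """Locate a single-bit error via the parity syndrome and flip it back."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3        # 1-indexed error position; 0 = no error
    if pos:
        c[pos - 1] ^= 1
    return c

def decode(c):
    """Extract the four data bits from a (corrected) codeword."""
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
stored = encode(word)
stored[4] ^= 1                        # a "cosmic ray" flips one stored bit
assert decode(correct(stored)) == word  # the flip is invisible to the reader
```

Note that this protects only stored data, by design; randomly tweaking the encode/correct logic itself is the second, unprotected kind of change discussed above.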
H+: Nanofuture described simple mechanical devices like wheels, cogs, hinges, shafts, and bearings that could be built at the nano-scale and then put together to create nanomachines. But what about the much more complex nano-scale sensors and computers the nanomachines would need to correctly interpret their surroundings and control their nano-parts? These sensors and computers would be especially critical for nanomachines that did complex tasks like autonomously moving through a person’s body and fixing their DNA. Has anyone designed nano-scale sensors and computers that you think will probably work?
Well, you might start by reading Rob Freitas’ Nanomedicine books. But the simple answer is that at the nanoscale, there isn’t much difference between a sensor and any other collection of machinery. We think of, say, a piezoelectric pressure sensor as being more “sensor-like” than an assembly with a spring between plates coupled with a rack-and-pinion driving an output shaft — but the latter could be more compact in nanomachinery.
H+: How is machine thinking different from human thinking, and why haven’t we been able to make machines that can think like humans?
Human thinking is deeply associative and massively parallel. We could build machines to think that way, and we will, but in the past we were too interested in efficiency. Our attempts to simulate thinking were patterned after the sequential trace of conscious thought, which is essentially just the “executive summary” of the real process. It’s as if you tried to start a company with only a president and receptionist, but forgot about the factory, sales force, labor, and middle management.
Consider riding a bicycle. How do you know when and how hard to push at the handlebars and pedals? You could solve all the dynamics equations describing the bicycle and your muscles and weight, but you don’t. You have a host of memories of all the times you’ve ridden or tried to ride, and they are accessed in parallel at a completely subconscious level to predict and plan your motions. This “muscle memory” is famously outside the realm of conscious thought, but all thought is built on an infrastructure like it.
H+: What is “formalist float”?
The gap between any description of how to ride a bicycle in words or other discrete symbols, and the actual processing and control capabilities you create when you learn to ride one.
We can ultimately capture the functionality of the latter in a formal structure, but we must realize that the working structure is going to be a lot deeper and more detailed than the apparent verbal-level one.
H+: How can we program AIs to overcome formalist float?
“Program” is perhaps an unfortunate word in this connection. We need to design machines that are more like brains and less like bureaucracies. One technique I’ve looked at in my own research involves having a system of interacting agents that operates like a market economy instead of a hierarchy. There’s a lot of current work in data mining and representation discovery that is germane. The bottom line is, don’t be afraid to throw lots of cycles at the problem — your brain does.
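The market-economy idea can be sketched with a toy auction loop. This is a hypothetical illustration of the general mechanism, not Hall’s actual research system; all the names and numbers are invented. Agents bid what they expect a task to be worth to them, the high bidder pays its bid and collects the payoff, and agents that predict their own competence well accumulate wealth and come to dominate the allocation without any central hierarchy.

```python
# Toy sketch of allocating work by auction rather than by hierarchy.
# Each agent bids 90% of its expected payoff; the winner pays its bid
# and collects the task's value, so accurate self-predictors get rich
# and end up doing most of the work.

class Agent:
    def __init__(self, name, skill):
        self.name = name
        self.skill = skill          # fraction of a task's value it can deliver
        self.wealth = 10.0

    def bid(self):
        # bid 90% of expected payoff, capped by what the agent can afford
        return min(self.wealth, 0.9 * self.skill * REWARD)

REWARD = 1.0
agents = [Agent("novice", 0.2), Agent("journeyman", 0.5), Agent("expert", 0.9)]

for task in range(200):
    winner = max(agents, key=Agent.bid)     # an auction, not an assignment
    winner.wealth -= winner.bid()
    winner.wealth += winner.skill * REWARD  # deterministic expected payoff

best = max(agents, key=lambda a: a.wealth)
print(best.name, round(best.wealth, 2))     # the expert ends up richest
```

The design point is that no component assigns tasks from above; competence is discovered by the price mechanism, which is the contrast with a bureaucracy that the answer draws.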
H+: How does that relate to your “Design for a brain”, and how would it be better than past AI programs?
That chapter from Beyond AI was a description of the ways I thought AI might profitably go from where it was then, and to a large extent it has done that, both in the ways I thought were good ideas and in many I hadn’t thought of. I’m pretty happy with where AI is now and how fast it’s moving. The long-term goal of human-level learning and adaptability is in our sights, I think.
And then, of course, there’s the episode of Big Bang Theory where Rajesh meets Siri…
H+: As you note in Beyond AI, the process of creating artificial intelligence has been a very frustrating one in which predictions about impending successes have repeatedly fizzled. Why are things different now?
a) Processing power, which is coming into the range where it can handle the associative and parallel methods the brain uses, and
b) Robotics, which forces AI practitioners to confront the depth of real-world tasks rather than defining them away in favor of over-simplified toy problems.
H+: But in Beyond AI, you said past failures in the AI field were due to other factors unrelated to slow hardware and primitive robotics, like the splintering of the research effort during Norbert Wiener’s time, decreased funding for pure AI research starting in the 1980s, and the premature fielding of flawed AI algorithms in that same decade. Has progress been made solving these problems as well?
Science advances, funeral by funeral. The recombination of robotics and AI essentially heals the cybernetics split. AI is now being done at companies, such as IBM and Google, which are beginning to make money with it. This is a new and much healthier phase than the dependence on capricious research grants. Ten years or so ago I predicted a major takeoff in AI progress when there was enough perception of AI as a money-maker that it started attracting serious investment. That appears to be getting close.
H+: When do you think the first computer will pass the Turing Test, and how will the world react?
Arguably now — the definition in Turing’s paper is not as stringent as people seem to think — and inarguably before the decade is out. It won’t make all that much difference, though, because a computer at that level will just be one more person. Much more important, economically at least, is the fact that computers / robots will become competent at more and more jobs over the period, and forces of reaction will set in with a vengeance. Imagine what the Teamsters will say about self-driving trucks…
H+: For better or for worse, Hollywood has influenced most people’s ideas of what AIs will be like. Movies like Terminator, A.I., I, Robot, and Star Trek have created widespread preconceptions about how AIs will behave. How accurate will these common preconceptions be?
Understanding the coming advanced technologies by watching Hollywood sci-fi is about as useful as trying to understand what it’s like to live in a suburb by watching Nightmare on Elm Street.
H+: In Beyond AI, you talk about how different types of AIs will exist (hypohuman, diahuman, epihuman, hyperhuman, parahuman, allohuman). Let’s imagine we’re living in the late 21st century. How might humans in their daily lives interact with these different AIs?
To some extent, it might be like being in an upper-class family in Victorian times. The hypohuman AIs are the service robots – chambermaids, chef, groundskeeper, chauffeur, and so forth. You’d also have some epihuman ones – your butler (think Jeeves: he’s smarter than you are, and a good thing too), secretary, doctor, lawyer, accountant – you might own these or rent them the way you do human ones. Fully hyperhuman AIs would play the same roles as corporations or government departments do today.
A parahuman AI would be something like a very advanced autopilot for your flying car. It would have a flying ability that was more like a bird’s instinctive one, complete with the appropriate senses; but it would have to understand the desires and sensibilities of, and communicate with, the humans it was carrying.
An allohuman AI might be for example a weather prediction system that did its own meteorology research to improve its abilities, or a robotic asteroid-mining spacecraft.
As for a day in the life, try to explain to a sharecropper or housewife of 1700 what it’s like to do a day’s work sitting in a chair in an air-conditioned office designing interactive web pages, surfing the net, shopping at Amazon and iTunes, driving (!) to a mall and a restaurant, and occasionally flying across the country. Those people are only one industrial and one information revolution away. Our industrial and information revolutions to come, nanotech and AI, form at least as great a gulf between us and the late-21st-century folks.
A lot of life would be like surfing the web, except that you could go to places and interact with real things (both real travel and hi-fi virtual reality). You have a cadre of mentors / friends / servants (who are AIs) so you are as capable of doing things as a decent-sized company today. (Think in terms of giving the sharecropper a tractor and a combine.) You have the option of having a lot of fun (you can basically live in any fantasy world you want), or accomplishing a huge amount. Most people will do some of one and some of the other.
H+: Your two books make it clear you think radical advances in AI and nanotech are coming by the middle of this century, and that life will be transformed by both. But is it possible we could create advanced AIs several decades before advanced nanomachines, or vice versa? How would things be different for us under the two scenarios?
AI almost certainly gets here first. The hardware to run it is basically here now. Software like Watson is impressive not because it won a game — basically a standard applied-AI kind of task — but because it built its own database from unstructured input.
It’s not clear how much of a difference this will make. Early AIs won’t be superintelligences; they’ll be roughly as smart as people, albeit not so prone to fatigue and error. Interaction with nano development will basically be that advancing AI will accelerate any intellectual project.
The main place I see an acceleration in the other direction, if any, would be an increasing availability of quantum computing. At the moment no one knows how to use that for AI, but there are some possibilities.
H+: You say that “Interaction with nano development will basically be that advancing AI will accelerate any intellectual project.” This leads to your belief that AIs will speed up progress so much that they will initiate a technological Singularity. But even hyperhuman AIs that think a trillion thoughts a second will need to do real-world experiments to test their theories before they can create new science and technologies. Building the necessary labs, particle accelerators, or whatever else takes time, as do medical experiments meant to improve human health. Some have said this temporal bottleneck makes a true Singularity impossible, since progress could still only happen at human-level rates. Is there any way around this problem?
I’ve never been a believer in the more radical “overnight” Singularity scenarios. My take has always been that what’s going to happen can be better described as another Industrial Revolution or the computer revolution that has happened over the past few decades. Not only is there the problem of long feedback loops through real-world experimentation, but there is what I call the Machiavelli Effect: “… there is nothing more difficult to take in hand, more perilous to conduct, or more uncertain in its success, than to take the lead in the introduction of a new order of things. Because the innovator has for enemies all those who have done well under the old conditions, and only lukewarm defenders in those who may do well under the new.” (The Prince)
H+: You say that AIs won’t kill off the human race because they will be “guaranteed trustworthy,” meaning they would have inbuilt failsafes preventing them from being dishonest, murderous, or psychopathic. The emotion-based human conscience is our own version of that, and it indeed makes most of us guaranteed trustworthy, but only toward other humans. Our inbuilt failsafes don’t apply to organisms that we consider very different from or inferior to ourselves, and the same highly moral person who never lies and who runs into a burning house to save a baby wouldn’t bat an eyelash at exterminating a thousand ants living behind his walls. And very few fellow humans would judge him to be an immoral person overall.
Why wouldn’t AIs adopt the same dichotomous morality? It might be optimal for a machine civilization’s members to be guaranteed trustworthy toward each other but to have no respect for human life.
If we allowed robots to evolve an ethic from scratch, that might happen. However, we would be much better advised to bring them up in our own ethical traditions, just as we will presumably teach them to communicate in our existing languages.
That reference was in a passage about mental properties that would be likely to be invariant with respect to self-improvement. “The ability to be guaranteed trustworthy” in that context meant that once a machine had learned the value of the same kinds of trustworthiness guarantees that humans use, such as having a history and reputation of square dealing, posting bond, entering enforceable contracts, and generally living under a civilized regime with laws, police, and courts, such a machine would be unlikely to throw that away for a short-term apparent payoff. I don’t think that the kind of mathematically provable trustworthiness sought by some Singularitarian AI types is possible.
I do think, on the other hand, that it is very much possible to build machines that are more trustworthy than human beings.
One of the bright spots in the future is that our civil institutions are probably going to be taken over by machines. This will very likely engender a very critical re-evaluation of their (the institutions’) trustworthiness, which is sorely needed. Furthermore, the new robot versions might actually work.
H+: I still don’t understand what would keep AIs from killing off the human race. In the future you’ve described in Nanofuture and Beyond AI:
What would stop AIs from just killing us?
Machines (and people) can excel along different dimensions. What you’ve described are machines that are better than people in one dimension (intelligence) but inferior to us in another (morality). I argue that if it is possible to build a machine that exceeds us in one of those dimensions, it’s possible to build one that exceeds us in the other. Furthermore, the ethical dimension is the more important of the two … obviously. Beyond AI was first and foremost a call to action that AI people should start trying to understand how to build ethical machines, as they try to build intelligent ones.
As I argue in Beyond AI, once you start looking at the problems, it seems very likely that it’s easier — less of a technical challenge — to build more moral machines than more intelligent ones.
Thus the answer to your question is the same as the answer to this one: Suppose you had a newborn baby sister. You could easily grab her in one hand and dash her brains out against a rock. Why don’t you?
H+: Well, humans can’t bring themselves to kill babies because they can’t change the “programming” that hardwires them to be strongly averse to murder. AIs, however, would be able to change their own code. Isn’t there a danger that, if AIs developed a highly advanced knowledge of ethics, philosophy, and evolutionary psychology, they would realize our attitudes toward things like the wrongness of murdering people were just rules we made up and emotionally internalized for pragmatic reasons?
My own highly advanced knowledge of ethics, philosophy, and evolutionary psychology tells me that that’s vanishingly unlikely. Einstein (not to mention Heisenberg) rewrote the laws of physics, but most of what Newton had said remained essentially a good description in the regimes in which it had generally been used. Similarly with ethics.
H+: Is the West losing its lead in nanotechnology and AI?
Not really, although I would imagine that the “Asian tigers” from Japan to Singapore would be likely to contribute as much to these as they do to any other high tech.
H+: If you could re-write Nanofuture or Beyond AI today, would you change anything?
There’s been significant progress toward nanotech capability since Nanofuture was written, but the estimates of what a mature nanotech could do have held up fairly well. If I were writing Beyond AI today, I think I could leave off the first third or so of the book where I try to convince the reader that true AI is really on the way. Intervening developments from self-driving cars to Watson have made that a much easier argument to make.
H+: A recurring concern of yours is future overpopulation. Thanks to nanomachines providing everyone with free food, water, and advanced medical care, the death rate would go almost to zero, and even a modest global birth rate would push the human population into the hundreds of billions by the 2200s. You believe Earth will get so crowded that people will have to live in 60-mile-high skyscrapers, roads and railways will become impractical since they’d take up scarce land needed for housing (the dominant mode of transit would switch to flying cars as a result), and people will be forced to move into remote areas like deserts and tundra. Some would even be forced into underwater habitats and into space. If the situation got that bad, wouldn’t it be preferable to ban or license births?
When babies are outlawed, only outlaws will have babies… I think it will be interesting to watch China, where they have tried to do this. But in the long run, I think evolution will win out, and the future will belong to those who take the new territory and populate it.
At a mature nanotech level of physical capability, the population the solar system is capable of supporting is at least five orders of magnitude more than what we have now. Nature abhors a vacuum.
H+: One of the common errors you said people made when trying to predict the future is that they ignored human nature. Aren’t you doing the same by predicting that future people will want to live in inhospitable areas (desert, tundra, underwater, free-floating space house) or in hypercrowded megacities? Doesn’t human nature make us want to live in places with a medium amount of sunlight, greenery outside, and some personal space?
Revealed preferences — what people actually do as opposed to what they say — show us that quite a lot of people will go and live in big crowded cities. As for desert, look at the Phoenix area, one of the fastest-growing megalopolises in the U.S. over the past few decades. How inhospitable an environment is, is very much a function of technology. Nanotech will make many areas seem more inviting than they do now; after that it’s up to people’s tastes.
H+: Going back to the overpopulation problem, towards the end of Nanofuture, you say it will someday be possible for humans to edit their natures. For instance, a person could change their natural aesthetic preferences so they would find an ugly office building attractive. Could we stop overpopulation by also tweaking people to not want kids?
Suppose 99% of the population does that, and 1% decides (or tweaks themselves so that) they like 10 kids apiece. How does the makeup of the population look 10 generations out?
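The arithmetic behind that rhetorical question can be sketched directly. This is a back-of-envelope model with invented but representative numbers: the 99% hold at replacement fertility, while ten children per couple roughly quintuples a lineage each generation.

```python
# Back-of-envelope: 99% of the population at replacement fertility
# (2 children per couple, growth factor 1x per generation) vs. 1% at
# 10 children per couple (growth factor ~5x per generation).

low_fertility, high_fertility = 0.99, 0.01   # initial population shares

for generation in range(10):
    high_fertility *= 5                      # lineage quintuples each generation
    # low_fertility stays constant at replacement rate

share = high_fertility / (low_fertility + high_fertility)
print(f"high-fertility descendants after 10 generations: {share:.3%}")
```

Even with these generous assumptions for the majority, the high-fertility minority is essentially the entire population ten generations out, which is the point of the question.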
H+: Here’s a passage from Nanofuture:
“Imagine you have a computer, a sort of super-PDA, that sits in your pocket. It has the storage to hold a library of reference material, as well as wireless access to the Internet when needed. It has the latest in voice and image processing software, and the computational horsepower to run it in real time. You wear a pair of glasses that can overlay a generated image on the real world, giving you captions, briefings, memory jogs, whatever, without your having to seek out a screen or laptop.
The system listens to everything you say or that is said in your presence. It watches what you see with tiny cameras in the glasses. It transcribes everything so that you can review conversations at leisure. It remembers everyone who you are introduced to and can place a virtual nametag on them when you meet them again. Anytime a question is asked, it looks up or calculates the answer and puts it in a subtitle in your field of view. It translates from other languages, spoken and written, the same way. Or indeed, reading in your own language, if your gaze lingers on a word and you mumble ‘huh?’ the definition would pop into view.”
You wrote that in 2005, and it sounds remarkably similar to the augmented reality glasses that are now, eight years later, under serious commercial development. How long will it be before the augmented reality glasses paradigm gives way to the next paradigm you predicted – electronic implants in people’s nervous systems – and what will need to change for people to accept surgical implants in their bodies and brains?
Given the shift toward acceptance of things like nipple rings and tattoos, I don’t see the “squick factor” being much of a roadblock. I would personally want to see some major improvements in software reliability and security before I got one, though!
H+: What are you doing with your life?
I’m in the middle of writing my next book, tentatively entitled Where Is My Flying Car? It’s an examination of the future we thought we were going to get in the early ’60s (e.g., The Jetsons) vis-à-vis what we actually got.
One of the more interesting questions is whether we could actually have had flying cars by now if we had tried. In pursuing this I’ve been studying a lot of aeronautical engineering and learning to fly.
H+: Thanks for your time!