Going Soft on Nanotech
Why MNT nanomachines won’t work, but there’s still plenty of room at the bottom
An Interview with Dr. Richard A.L. Jones
Richard Jones has a first degree and PhD in Physics from Cambridge University. After postdoctoral work at Cornell University, he was a lecturer in physics at Cambridge for eight years before moving to Sheffield University as a Professor of Physics. He is a Fellow of the Royal Society and a Council member of EPSRC, the UK government body that funds physical science and engineering. He blogs about nanotechnology and science policy at www.softmachines.org, and his book “Soft Machines: nanotechnology and life” is published by Oxford University Press.
H+: You’re a critic of Eric Drexler–one of the most recognizable names people associate with nanotechnology. You say you like Drexler’s ideas from Engines of Creation (1986) but dislike the ideas from his Nanosystems (1992). What’s the difference?
RJ: I’m both a fan of Eric Drexler and a critic – though perhaps it would be most accurate to say I’m a critic of many of his fans. Like many people, I was inspired by the vision of Engines of Creation, which outlined what would be possible if we could make functional machines and devices at the nanoscale. If Engines set out the vision in general terms, Nanosystems was a very thorough attempt to lay out one possible concrete realisation of that vision.

Looking back at it twenty years on, two things strike me about it. One was already pointed out by Drexler himself – it says virtually nothing about electrons and photons, so the huge potential that nanostructures have to control their interactions, which forms the basis of the actually existing nanotechnologies underlying electronic and optoelectronic devices, is left unexplored. The other has only become obvious since the book was written. Engines of Creation draws a lot on the example of cell biology as an existence proof that advanced nanoscale machines, operating with atomic precision, can be made. This represents one of Drexler’s most original contributions to the formation of the idea of nanotechnology – Feynman’s famous 1959 lecture, in contrast, had very little to say about biology. Since Nanosystems was written, though, we’ve discovered a huge amount about how the nanomachines of biology actually work and, even more importantly, why they work the way they do. What this tells us is that biology doesn’t use the paradigm of scaling down macroscopic mechanical engineering that underlies Nanosystems. So while it’s right to say that biology gives us an existence proof for advanced nanotechnology, it doesn’t at all support the idea that the mechanical engineering paradigm is the best way to achieve it. The view I’ve come to is that, on the contrary, the project of scaling down mechanical engineering to atomic dimensions will be very much more difficult than many of Drexler’s followers think.
H+: Does that mean you think it is impossible to make MNT-based nanomachines?
RJ: No, not impossible. But compared to the expectations of their more enthusiastic proponents, I think they will be very much more difficult to make, they will probably need special conditions such as ultra-high vacuum and very low temperatures, and as a result their impact will be considerably less than popularly envisaged.
H+: Why would Drexler’s nanomachines need ultra-low temperatures and vacuum conditions to operate?
RJ: Low temperatures are needed to suppress Brownian motion, and the consequent wobbling of nanoscale mechanisms and machines, which would hinder their precise operation. The amplitude of this wobbling scales (for classical systems) as the square root of absolute temperature, so to reduce it by a factor of 2 relative to room temperature (about 300 K) requires cooling to 75 K (liquid nitrogen temperature), and by a factor of 10 requires 3 K (liquid helium temperature). How much wobbling you can live with is going to depend on the design, of course; macroscopic precision engineering depends on fine tolerances, so one should expect design paradigms based on macroscopic mechanical engineering to be quite sensitive to the positional uncertainty introduced by thermal vibrations.
Ultra-high vacuum will be needed because in ambient conditions (and even more so in aqueous conditions) any molecules that adsorb and get caught up in rolling or rubbing surfaces are at risk of undergoing uncontrolled mechano-chemistry – the forces on the molecules will often be large enough to break covalent bonds, producing reactive species that will greatly increase the friction and lead to damage to the structures.
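The square-root scaling Jones describes can be checked with a quick back-of-the-envelope calculation. Here is a minimal Python sketch; the 300 K room-temperature baseline is our assumption, implied by his figures of 75 K and 3 K:

```python
T_ROOM = 300.0  # K; assumed room-temperature reference point

def temperature_for_wobble_reduction(factor, t_ref=T_ROOM):
    """Temperature needed to shrink thermal vibration amplitude by `factor`.

    For a classical system, equipartition gives amplitude ~ sqrt(T),
    so halving the wobble requires a four-fold drop in temperature.
    """
    return t_ref / factor ** 2

print(temperature_for_wobble_reduction(2))   # 75.0 K -> liquid nitrogen range
print(temperature_for_wobble_reduction(10))  # 3.0 K  -> liquid helium range
```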
H+: Even if MNT nanomachines were confined to tightly controlled labs, couldn’t they still lead to a manufacturing revolution in some areas? For example, maybe you could put graphite in one end and have billions of dollars worth of carbon nanotubes come out the other. Even if the nanomachines didn’t directly interact with the outside environment, they could still have a huge impact.
RJ: This depends on how much it costs to develop and maintain nanomachines in this controlled environment, what throughput you are able to achieve, and how the costs compare with those of rival products. There are commercial technologies that depend on ultra-high vacuum or very low temperatures, but they are expensive and inconvenient. I doubt that making nanotubes this way would ever be competitive, even if it were possible. But a large-scale integrated opto-electronic quantum computer might be a different matter.
H+: How is the world at the nanoscale different from the world at the macroscale, how are organic nanomachines (i.e. cells and bacteria) adapted to these conditions, and what challenges do they pose to MNT-based nanomachines?
RJ: Biological nanomachines exist in a world that is warm and wet. At the nanoscale, the way physics works in these conditions is very unfamiliar to our intuitions, developed as they are at the macroscale. The random jiggling of Brownian motion not only makes nanoscale particles move around randomly, it means that any structure undergoes random internal flexings, vibrations and stretchings. On these small scales, surfaces stick to each other, and water behaves the way a very viscous fluid – the stickiest molasses – would at the macroscale. Biological nanomachines have evolved not just to cope with this challenging environment but to thrive in it, using design principles that are completely unknown in our macroscopic engineering. So biological motors don’t merely tolerate the fact that they are rather floppy and continuously flexing and writhing at random; that floppiness is why they work at all. And often they work at extraordinarily high efficiencies.
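The “stickiest molasses” point is conventionally quantified with the Reynolds number, the ratio of inertial to viscous forces. A rough sketch (the particle size and speed below are illustrative assumptions, not figures from the interview):

```python
def reynolds_number(density, speed, length, viscosity):
    """Re = rho * v * L / eta: ratio of inertial to viscous forces."""
    return density * speed * length / viscosity

# A ~1 micron swimmer moving at 10 microns/s in water (illustrative values):
re_micro = reynolds_number(density=1000.0,   # kg/m^3, water
                           speed=10e-6,      # m/s
                           length=1e-6,      # m
                           viscosity=1e-3)   # Pa*s, water
print(re_micro)  # ~1e-5: viscosity utterly dominates, inertia is negligible
```

At Reynolds numbers this small, a micro-swimmer coasts roughly a fraction of an atomic diameter once it stops pushing, which is part of why biological motors rely on such unfamiliar design principles.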
H+: So if you made one of Drexler’s MNT nanomachines and injected it into a human being’s bloodstream to clean up arterial plaques, what would happen to it in this alien nano-environment?
RJ: To be fair to Drexler, many of the images one sees of medical nanobots are the products of an artist’s imagination rather than any properly worked-out nano design. But to picture the problems that a nanoscale robot designed on the principles of mechanical engineering would encounter in the bloodstream, you need to imagine something lacking in rigidity, constantly shaking around due to Brownian motion, and attracting sticky molecules from the environment onto every free surface. It’s not an environment in which a precisely engineered mechanical design would work well. A mechanical clock, made of rubber, in a washing machine filled with glue would give you some idea.
H+: In 2005, you proposed six important things MNT proponents could do to bolster the feasibility of MNT. They are listed below. How much progress have they made meeting your challenge?
1. Do more detailed and realistic computer modeling of hypothesized nanomachine components (gears, shafts, etc.) to determine whether they would hold their shapes and not disintegrate or bend if actually built.
2. Re-do computer simulations of MNT nanomachines, this time using realistic assumptions about Brownian motion and thermal noise. The nanomachines’ “hard” parts would behave more like rubber, and they would experience intense thermal buffeting at all times. Delicate nanomachine mechanisms could be easily destroyed.
3. Re-do nanomachine computer simulations to realistically account for friction between moving parts and for heat buildup. Heat and vibration could destroy nanomachines.
4. Do more detailed computer simulations of Drexler’s nano-scale motor to make sure it would actually work. He never modeled it down to the level of individual atoms. The motor is a critical component in Drexler’s theoretical nanomachine designs, as it powers many of the moving parts.
5. Design a nano-scale valve or pump that selectively moves matter between the nanomachine’s enclosed inner area and the ambient environment. To be useful, nanomachines would need to “ingest” matter, modify it internally, and then expel it, but they would also have to block the entry of unwanted matter that would jam up their exposed moving parts. A valve or pump that is 100% accurate at discriminating between good and bad foreign materials is thus needed.
6. Flesh out a convincing, multi-year implementation plan for building the first MNT nanomachines. Either top-down or bottom-up approaches may be pursued. In either case, the plan must be technically feasible and must make sense.
RJ: Not a great deal, as far as I can tell. There was some progress on points 1, 2 and 3 following the introduction of the software tool Nanoengineer by the company Nanorex, but this seems to have come to a halt around 2008. I don’t know of any progress on points 4 and 5. The 2007 Technology Roadmap for Productive Nanosystems from Battelle and the Foresight Nanotech Institute has a good list of things that need to be done to make progress with a number of different approaches to making functional nanoscale systems, including MNT approaches, but it does not go into a great deal of detail about how to do them.
H+: What are “soft machines”?
RJ: I called my book “Soft Machines” to emphasise that the machines of cell biology work on fundamentally different principles to the human-made machines of the macro-world. Why “soft”? As a physicist, one of my biggest intellectual influences was the French theoretical physicist Pierre-Gilles de Gennes (1932-2007, Nobel Prize for Physics 1991). De Gennes popularised the term “soft matter” for those kinds of materials – polymers, colloids, liquid crystals etc – in which the energies with which molecules interact with each other are comparable with thermal energies, making them soft, mutable and responsive. These are the characteristics of biological matter, so calling the machines of biology “soft machines” emphasises the different principles on which they operate. Some people will also recognise the allusion to a William Burroughs novel (for whom a soft machine is a human being).
H+: What kind of work have you done with soft machines?
RJ: In my own lab we’ve been working on a number of “soft machine” related problems. At the near-term end, we’ve been trying to understand where the molecules go when you manufacture a solar cell from solutions of organic molecules – the idea being that if you understand the self-assembly processes, you can get a well-defined nanostructure that gives you a high conversion efficiency with a process you can use very cheaply on a very large scale. Further from applications, we’ve been investigating a new mechanism for propelling micro- and nano-scale particles in water. We use a spatially asymmetric chemical reaction so that the particle creates a concentration gradient around itself, as a result of which osmotic pressure pushes it along.
H+: What commercial and/or consumer applications might your research and similar research have?
RJ: There’s still a long way to go, but an obvious goal of the work on propulsion is to make particles that can swim towards a target, using a mechanism analogous to that which bacteria use to swim towards food or away from poisons. Then this could be incorporated in something like a drug delivery device.
H+: Putting aside MNT, what other design approaches would be most likely to yield advanced nanomachines?
RJ: If we are going to use the “soft machines” design paradigm to make functional nanomachines, we have two choices. We can co-opt what nature does, modifying biological systems to do what we want – in essence, this is what underlies the current enthusiasm for synthetic biology. Or we can make synthetic molecules and systems that copy the principles biology uses, possibly thereby widening the range of environments in which they will work. Top-down methods are still enormously powerful, but they will have limits.
H+: So “synthetic biology” involves the creation of a custom-made microorganism built with the necessary organic parts and DNA to perform a desired function. Even if it is manmade, it only uses recognizable, biological parts in its construction, albeit arranged in ways that don’t occur in nature. But the second approach involving “synthetic molecules and systems that copy the principles that biology uses” is harder to understand. Can you give some clarifying examples?
RJ: If you wanted to make a molecular motor to work in water, you could use the techniques of molecular biology to isolate biological motors from cells, and this approach does work. Alternatively, you could work out the principles by which the biological motor worked – these involve shape changes in the macromolecules coupled to chemical reactions – and try to make a synthetic molecule which would operate on similar principles. This is more difficult than hacking out parts from a biological system, but will ultimately be more flexible and powerful.
H+: Why would it be more flexible and powerful?
RJ: The problem with biological macromolecules is that biology has evolved very effective mechanisms for detecting them and eating them. So although DNA, for example, is a marvellous material for building nanostructures and devices from, it’s going to be difficult to use these directly in medicine, simply because our cells are very good at detecting and destroying foreign DNA. So using synthetic molecules should lead to more robust systems that can be used in a wider range of environments.
H+: In spite of your admiration for nanoscale soft machines, you’ve said that manmade technology has a major advantage because it can make use of electricity in ways living organisms can’t. Will soft machines use electricity in the future somehow?
RJ: Biology uses electrical phenomena quite a lot – e.g. in our nervous system – but generally this relies on ion transport rather than coherent electron transport. Photosynthesis is an exception, as may be certain electron-transporting structures recently discovered in some bacteria. There’s no reason in principle that the principles of self-assembly shouldn’t be used to connect up electronic circuits in which the individual elements are single conducting or semiconducting molecules. This idea – “molecular electronics” – is quite old now, but it’s probably fair to say that as a field it hasn’t progressed as fast as people had hoped.
Trends in nanotechnology
H+: In your opinion, what are the most important nanotechnology advances that have happened in the last 10 years?
RJ: What’s excited me most personally has been the remarkable advance of DNA nanotechnology. The visionary work of Ned Seeman showed how powerful the idea of programmed self-assembly of DNA strands to produce atomically precise nanostructures could be; the last ten years have seen DNA used to make single-molecule motors and machines and to carry out logical operations. These are true synthetic “soft machines”. At the moment they are still laboratory curiosities, but given the way the cost of synthesising DNA is falling, this may soon change.
H+: As a person who has spent his life working on nanotechnology, do you think the field is advancing exponentially or linearly?
RJ: It’s not a single technology, so it’s not correct to think of it advancing at a single rate. Some things haven’t gone as fast as people perhaps expected – e.g. single-atom manipulation using scanning probe microscopes – while other areas have leapt ahead – DNA-based nanotechnology, for example. But, in reality, most technologies advance in fits and starts. Phases of exponential growth arise quite naturally – it’s natural to set yourself the target of making some fixed fractional improvement every year, and that’s all exponential growth is. But at some point some physical limit kicks in and improvement plateaus. You see this pattern, for example, in the improvement of steam engine efficiencies in the 19th century – efficiency grew exponentially for some decades, but eventually the 2nd law of thermodynamics exerted itself and the growth rate tailed off.
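The exponential-then-plateau pattern Jones describes is the familiar logistic curve. A minimal sketch (the growth rate and cap below are arbitrary illustrative numbers, not data about steam engines):

```python
def capped_growth(start, rate, cap, years):
    """Fixed fractional improvement per year (exponential growth) that
    slows as a hard physical limit `cap` is approached -- logistic growth."""
    value, history = start, []
    for _ in range(years):
        value += rate * value * (1 - value / cap)  # growth term shrinks near cap
        history.append(value)
    return history

curve = capped_growth(start=1.0, rate=0.15, cap=25.0, years=60)
# Early years: ~15% annual improvement; late years: improvement stalls near the cap.
```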
H+: More generally, you’ve cast skepticism on the notion that technology is advancing exponentially or at an ever-growing rate. You said the Human Genome Project–often cited as proof that technology is improving exponentially–was a “bubble” and that innovation in energy and biology is actually slowing down. Do you still believe this? Will things accelerate in the future?
RJ: The idea that the Human Genome Project was a “bubble” originated with the econophysicist Didier Sornette – he refers to it as a “social bubble”, by which he means that many supporters of the project reinforced each other in their enthusiasm for it, in effect creating an exaggerated general perception of its short-term benefits which attracted resources to the project. Of course “bubbles” have long been associated with new technologies; investors are swept up in enthusiasm for the transformative nature of the technology, over-invest in marginal and oversold projects, and often lose their money. The early days of the railways were full of such episodes, and the dot-com bubble is a more recent one. Technological bubbles may not always be entirely a bad thing – we did end up with railway networks and lots of useful ICT infrastructure, even if innocent people lost their savings. But the fact that we seem to need bubbles to make technological progress is actually a sign of our failure to invest more consciously for the long term – as a society, we only seem able to make these investments if we delude ourselves that the returns will arrive more quickly than is rationally likely. Building an economic system on self-delusion seems like a bad idea to me. Our short-termism also leads to a particular pathology. Innovation in the realm of information – digital innovation – can be done much more quickly than innovation in the material realm – like nanotechnology – or the biological realm. So if we are biased towards innovations that bring returns in the short term, we will end up under-investing in areas like nanotechnology and nanomedicine, and progress in those areas will slow down.
H+: But don’t advances in information technology enable faster advances in materials and biology? Don’t more powerful computers and computer models speed up research on everything else?
RJ: Yes, indeed they do. Information technology allows us to do much, much more, both in gathering data and in analysing large datasets. On the other hand, the science we need to do gets more difficult, because the easy work has already been done. But for all the excitement over “big data”, I suspect the rate-limiting step for generating really new insights and knowledge remains those scarce resources: human ingenuity and creativity.
H+: Engines of Creation (your favorite Drexler book) predicted that highly advanced nanomachines would inevitably be created, that they would be programmable with data, that they would enable atom-by-atom construction of things (including copies of themselves), that they would revolutionize manufacturing, medicine, human longevity and food production, and that they could remove manmade pollutants from the atmosphere but also serve as weapons of mass destruction. Do you agree with all of that?
RJ: I don’t think anything in the future of technology is inevitable. Technology isn’t an autonomous force that follows some deterministic linear trajectory; instead, as we develop our technologies, we find a way through a garden of forking paths. Not every conceivable technology that isn’t contrary to the laws of nature is doable given the constraints that bind us; the technologies that come to pass depend on the history of what we’ve invented already. Above all, a technology may be possible, but someone has still got to make it happen. As Drexler wrote in Engines of Creation, “we need useful dreams to guide our actions”, but we shouldn’t be surprised if things don’t turn out the way we expect.
H+: What types of nanotech-enabled advances will happen in the future?
RJ: Don’t forget that much everyday information technology is already the result of top-down fabrication operating well within the nanoscale. Within twenty years we should have much more sophisticated optoelectronic devices, which combine nanostructured metals, dielectrics and semiconductors to achieve complete control of the interaction of light and electrons. By that time, we should be moving towards the implementation of quantum computers. More mundanely, and perhaps even sooner, I hope we will be able to manufacture efficient solar cells very cheaply and on a very large scale. On a longer timescale, I would hope that medical nanotechnology will have progressed much further, so that we are able to understand and intervene much more purposefully in the molecular-level operations of cells. As a result, for example, we might hope that regenerative medicine would be much further advanced.
I’d hope to see much more selective drug delivery, for example cancer therapies that were more effective and with fewer side effects, reliable gene therapy to correct genetic diseases, delivery of small interfering RNA molecules, for example as anti-viral agents, to give just a few examples. In regenerative medicine, I’d hope it would lead to reliable reprogramming of cells and the development of better scaffolds for new tissues.
H+: Over the next 10-20 years, what are your thoughts on Moore’s Law, and how will we build future generations of integrated circuits?
RJ: People have been predicting the end of Moore’s law for some time, and its continuation this long is a powerful lesson in how effective incremental engineering innovation can be, with brilliant innovations like phase-shift lithography pushing current top-down methods to sizes that seemed impossible only a few years ago. So one should be careful about calling an end to this run of success. That said, Moore’s law will certainly come to an end sooner or later, quite possibly in the next ten years. My own suspicion is that what will kill Moore’s law won’t be physics, but economics. As the cost of a single next-generation fab runs into the tens of billions of dollars, at some point that’s going to be indigestible for the companies and shareholders that have to invest in them. This will probably be a hiatus, rather than a complete end to the growth of computing power, though – at some point we will work out how to implement quantum computing in a scalable way.
H+: Gray Goo doomsday scenario: Realistic? Could it ever happen?
RJ: It must surely be possible in principle to make self-replicating, adaptive devices that take matter and energy from the environment to feed their growth and reproduction; that’s what bacteria are. To realise a “Gray Goo” scenario, the artificial replicators would have to out-compete the bacterial ecosystem; that will be a tall order given bacteria’s adaptability and effectiveness. But, in any case, this remains a very distant danger.
H+: So what should we fear about nanotechnology?
RJ: There are things to worry about with the development of nanotechnology, for example with the potential toxicity of nanoscale materials and the societal problems from unequal access to technology. But rather than worrying about runaway technology, my biggest fear now is the opposite – that we won’t devote enough resources to get the innovation we need. In this case we’ll end up devoting more of society’s efforts to just getting by, with a worsening environment and with depleted resources. Of course we also need to recognise that some of the downsides of nanotechnology that people have talked about have already arrived – mass surveillance is here already.
H+: But unequal access to technology has always existed: In the early 90’s, only rich people had cell phones. Ten years later, even many poor people had them. The same could be said for new medicines that are at first expensive and then cheap once their patents expire. Why should unequal access to technology be any worse a problem in the future than it is now?
RJ: Whether we have wide access to new technologies or whether that access is restricted to a privileged few is for us collectively to choose, as it’s a matter of politics rather than technology. But history seems to suggest that having more advanced technologies makes societies less, not more, equal. Access to new technologies gives access to power, and people don’t seem very good at sharing power.
H+: What do you think of the label “nanotechnology”? Is it a valid field? What do people most commonly misunderstand about it?
RJ: Nanotechnology, as the term is used in academia and industry, isn’t really a field in the sense that supramolecular chemistry or surface physics are fields. It’s more of a socio-political project, which aims to do to physical scientists what the biotech industry did to life scientists – that is, to switch their focus from understanding nature to intervening in nature by making gizmos and gadgets, and then to try and make money from it.
What I’ve found, doing quite a lot of work in public engagement around nanotechnology, is that most people don’t have enough awareness of nanotechnology to misunderstand it at all. Among those who do know something about it, I think the commonest misunderstanding is the belief that it will progress much more rapidly than is actually possible. It’s a physical technology, not a digital one, so it won’t proceed at the pace we see in digital technologies. As all laboratory-based nanotechnologists know, the physical world is more cussed than the digital one, and the smaller it gets the more cussed it seems to be…
H+: How come no one ever talks about building better micromachines? Why is the focus on nanomachines? After all, micromachines would still be small enough to, say, crawl through blood vessels.
RJ: That’s a good question – microsurgery is already clinically important now, and there will be further improvements as robot microsurgery becomes more advanced. But the fundamental argument for nanomedicine is this – we know that the fundamental processes of cell biology take place at the nanoscale, so if we are going to achieve the goal of intervening in biology’s most fundamental processes, this will need to be done at the nanoscale – at the level of individual biological molecules.
H+: Your thoughts on picotechnology and femtotechnology?
RJ: There’s a roughly inverse relationship between the energy scales needed to manipulate matter and the distance scale at which that manipulation takes place. Manipulating matter at the picometer scale is essentially a matter of controlling electron energy levels in atoms, which involves electron-volt energies. This is something we’ve got quite good at, when we make lasers, for example. Things get more difficult as we go smaller. To manipulate matter at the nuclear level – i.e. on femtometer length scales – needs MeV energies, while to manipulate matter at the level of the constituents of hadrons – quarks and gluons – we need GeV energies. At the moment our technology for manipulating objects at these energy scales is essentially restricted to hurling things at them, which is the business of particle accelerators. So at the moment we really have no idea how to do femtotechnology of any kind of complexity, nor do we have any idea whether there is anything interesting we could do with it if we could. I suppose the question is whether there is any scope for complexity within nuclear matter. Perhaps if we were the sort of beings that lived inside a neutron star or a quark-gluon plasma, we’d know.
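The inverse relationship Jones invokes can be made concrete with the uncertainty-principle estimate E ~ ħc/L, which sets the energy scale for relativistic matter confined to a region of size L (for atomic electrons, which are non-relativistic, the relevant scale is instead the ~eV Rydberg energy). A quick sketch:

```python
HBAR_C_MEV_FM = 197.327  # hbar*c in MeV*fm (equivalently eV*nm)

def confinement_energy_mev(length_fm):
    """Uncertainty-principle estimate E ~ hbar*c / L (L in femtometers)."""
    return HBAR_C_MEV_FM / length_fm

print(confinement_energy_mev(1.0))  # ~197 MeV: nuclear (femtometer) scale
print(confinement_energy_mev(0.1))  # ~2 GeV: quark/hadron scale
```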
H+: In what ways should science policy and the way science is practiced be changed?
RJ: I’m most familiar with science policy in the UK, but I think the UK exhibits some of the difficulties that the USA has, only in a more extreme form. In the last thirty years or so our governments have focused on a “supply-side” science policy – it’s been assumed that if one supports basic science and ensures a supply of skilled and trained manpower this will automatically translate into technological innovation. My concern is that we don’t currently attend to the “demand side” of innovation. In the past we had many laboratories (in both the private sector and the public sector) that connected basic science to the people who could convert technological innovations into new processes and products – one thinks in the USA of great institutions like Bell Laboratories. These applied technology labs have been run down or liquidated, and their place has not been fully taken by the new world of spin-outs and venture capital backed start-ups, whose time horizons are too short to develop truly radical innovations in the material and biological realms. So we need to do something to fill that gap.
H+: What do you think of the transhumanist and Singularity movements?
RJ: These are terms that aren’t always used with clearly understood meanings, by me at least. If by Transhumanism, we are referring to the systematic use of technology to better the lot of humanity, then I’m all in favour. After all, the modern Western scientific project began with Francis Bacon, who said its purpose was “an improvement in man’s estate and an enlargement of his power over nature”. And if the essence of Singularitarianism is to say that there’s something radically unknowable about the future, then I’m strongly in agreement. On the other hand, if we consider Transhumanism and Singularitarianism as part of a belief package promising transcendence through technology, with a belief in a forthcoming era of material abundance, superhuman wisdom and everlasting life, then it’s interesting as a cultural phenomenon. In this sense it has deep roots in the eschatologies of the apocalyptic traditions of Christianity and Judaism. These were secularised by Marx and Trotsky, and technologised through, on the one hand, Fyodorov, Tsiolkovsky and the early Russian ideologues of space exploration, and on the other by the British Marxist scientists J.B.S. Haldane and Desmond Bernal. Of course, the fact that a set of beliefs has a colourful past doesn’t mean they are necessarily wrong, but we should be aware that the deep tendency of humans to predict that their wishes will imminently be fulfilled is a powerful cognitive bias.