Not long ago nanotechnology was a fringe topic; now it’s a flourishing engineering field, and fairly mainstream. For example, while writing this article, I happened to receive an email advertisement for the “Second World Conference on Nanomedicine and Drug Delivery,” in Kerala, India. It wasn’t so long ago that nanomedicine seemed merely a flicker in the eyes of Robert Freitas and a few other visionaries!
But nano is not as small as the world gets. A nanometer is 10⁻⁹ meters – the scale of atoms and molecules. A water molecule is a bit less than one nanometer long, and a germ is around a thousand nanometers across. A proton, on the other hand, has a diameter of a couple of femtometers – where a femtometer, at 10⁻¹⁵ meters, makes a nanometer seem positively gargantuan. Now that the viability of nanotech is widely accepted (in spite of some ongoing heated debates about the details), it’s time to ask: what about femtotech? Picotech and other technologies at the scales between nano and femto seem relatively uninteresting, because we don’t know of any basic constituents of matter that exist at those scales. But femtotech, based on engineering structures from subatomic particles, makes perfect conceptual sense, though it’s certainly difficult given current technology.
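To make the scale gap vivid, here’s a quick back-of-the-envelope calculation – nothing but the arithmetic of the figures above:

```python
# The length scales discussed above, in meters
nanometer  = 1e-9    # scale of atoms and molecules
femtometer = 1e-15   # scale of protons and neutrons

# A femtometer is a million times smaller than a nanometer...
print(round(nanometer / femtometer))   # 1000000

# ...so the nano-to-femto jump is as big as the jump from a
# millimeter-scale grain of sand down to the nanoscale itself.
millimeter = 1e-3
print(round(millimeter / nanometer))   # 1000000
```

In other words, femtotech stands to nanotech roughly as nanotech stands to everyday macroscopic engineering.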
The nanotech field was arguably launched by Richard Feynman’s 1959 talk “There’s Plenty of Room at the Bottom.” As Feynman wrote there,
"It is a staggeringly small world that is below. In the year 2000, when they look back at this age, they will wonder why it was not until the year 1960 that anybody began seriously to move in this direction.
Why cannot we write the entire 24 volumes of the Encyclopedia Brittanica on the head of a pin?"
Note: the diagrams in the background of this picture were added much later in order to market Feynman’s lecture – Feynman articulated the core nanotech vision but he didn’t use the word “nanotech” nor depict details of this nature.
The next big step toward nanotech was Eric Drexler’s classic 1992 book Nanosystems, which laid out conceptual designs for a host of nanomachines, including nanocomputer switches, general-purpose molecular assemblers, and an amazing variety of other fun stuff. Drexler’s 1986 book Engines of Creation also played a large role, bringing the notion of nanotech to the masses. Contemporary nanotech mostly focuses on narrower nano-engineering than what Drexler envisioned, but arguably it’s building tools and understanding that will ultimately be useful for realizing Feynman’s and Drexler’s vision. For instance, a lot of work is now going into the manufacture and utilization of carbon nanotubes, which have a variety of applications, from the relatively mundane (e.g. super-strong fabrics and fibers) to potential roles as components of more transformative nanosystems like nanocomputers or molecular assemblers. And there are also a few labs such as Zyvex that are currently working directly in a Drexlerian direction.
But Feynman’s original vision, while it was focused on the nano-scale, wasn’t restricted to this level. There’s plenty of room at the bottom, as he said – and the nano-scale is not the bottom! There’s plenty more room down there to explore.
One might argue that, since practical nanotech is still at such an early stage, it’s not quite the time to be thinking about femtotech. But technology is advancing faster and faster each year, so it makes sense to think a bit further ahead than contemporary hands-on engineering efforts. My friend and colleague Hugo de Garis has been talking to me about femtotech for a while, and has touched on the topic in various lectures and interviews; he convinced me that the topic is worth looking at in spite of our current lack of knowledge regarding its practical realization. After all, when Feynman gave his “Plenty of Room at the Bottom” lecture, nanotech also appeared radically pie-in-the-sky. Hugo’s personal take on femtotech is presented in an essay he wrote recently, which is presented here as a companion piece to this article; the two articles are intended to be read together.
There are many possible routes to femtotech, and Hugo notes a number of them in his article, including some topics I won’t touch here at all like micro black holes and Bose-Einstein condensation of squarks. I’ll focus here largely on a particular class of approaches to femtotech based on the engineering of stable degenerate matter – not because I think this is the only interesting way to think about femtotech, but merely because one has to choose some definite direction to explore if one wants to go into any detail at all.
Physics at the Femto Scale
To understand the issues involved in creating femtotech, you’ll first need to recall a few basics about particle physics.
In the picture painted by contemporary physics, everyday objects like houses and people and water are made of molecules, which are made of atoms, which in turn are made of subatomic particles. There are also various subatomic particles that don’t form parts of atoms (such as photons, the particles of light, and many others). The behavior of these particles is extremely weird by the standards of everyday life – with phenomena like non-local correlations between distant events, observer-dependence of reality, quantum teleportation and lots of other good stuff. But I won’t take time here to review quantum mechanics and its associated peculiarities, just to run through a few facts about subatomic particles needed to explain how femtotech might come about.
Subatomic particles fall into two categories: fermions and bosons. These two categories each contain pretty diverse sets of particles, but they’re grouped together because they also have some important commonalities.
The particles that serve as the building blocks of matter are all fermions. Atoms are made of protons, neutrons and electrons. Electrons are fermions, and so are quarks, which combine to build protons and neutrons. Quarks appear to occur in nature only in groups, most commonly groups of 2 or 3. A proton contains two up quarks and one down quark, while a neutron consists of one up quark and two down quarks; the quarks inside each proton or neutron are held together by other particles called gluons. Mesons consist of two quarks – a quark and an anti-quark. There are six basic types of quark, beguilingly named Up, Down, Bottom, Top, Strange, and Charm. Of the four forces currently recognized in the universe – electromagnetism, gravity, and the weak and strong nuclear forces – quarks are most closely associated with the strong nuclear force, which controls most of their dynamics. But quarks also interact with the weak force; e.g., the weak force can transmute one flavor of quark into another, a phenomenon that underlies some kinds of radioactive decay, such as beta decay.
On the other hand, bosons are also important – for example photons, the particle-physics version of light, are bosons. Gravitons, the gravity particles proposed by certain theories of gravitation, would also be bosons.
The nucleus of an atom contains protons and neutrons. The electrons are arranged in multiple shells around the nucleus, due to the Pauli exclusion principle. Also note this sort of “solar system” model of particles as objects orbiting other objects is just a heuristic approximation; there are many other complexities and a more accurate view would depict each particle as a special sort of wave function.
The carbon atom, whose electrons are distributed across two shells.
Finally, just one more piece of background knowledge before we move on to femtotech. Fermions, unlike bosons, obey the Pauli exclusion principle, which says that no two identical fermions can occupy the same quantum state at the same time. For example, each electron in an atom is characterized by a unique set of quantum numbers (the principal quantum number, which gives its energy level; the azimuthal and magnetic quantum numbers, which characterize its orbital angular momentum; and the spin quantum number, which gives the direction of its spin). If not for the Pauli exclusion principle, all of the electrons in an atom would pile up in the lowest energy state (the K shell, the innermost shell of electrons orbiting the nucleus of the atom). But the exclusion principle implies that the different electrons must have different quantum states, which forces some of the electrons into different positions, leading to the formation of additional shells (in atoms with sufficiently many electrons).
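To make the exclusion principle’s bookkeeping concrete, here’s a toy sketch – a deliberate simplification that fills shells in order of increasing principal quantum number n, using the standard 2n² shell capacity that falls out of counting the allowed quantum-number combinations (real atoms fill subshells in a subtler order):

```python
def shell_capacity(n):
    """Maximum number of electrons in shell n: 2 * n**2, which comes
    from counting the distinct quantum-number combinations the Pauli
    exclusion principle allows within that shell."""
    return 2 * n ** 2

def fill_shells(num_electrons):
    """Fill shells in order of increasing n (a toy simplification:
    real atoms fill subshells in a more complicated order)."""
    shells, n = [], 1
    while num_electrons > 0:
        occupancy = min(shell_capacity(n), num_electrons)
        shells.append(occupancy)
        num_electrons -= occupancy
        n += 1
    return shells

print(fill_shells(6))   # carbon -> [2, 4]: two shells, matching the figure
print(fill_shells(2))   # helium -> [2]: a single full K shell
```

Carbon’s six electrons overflow the two-electron K shell, and the second shell is exactly the exclusion principle making room.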
Degenerate Matter as a Possible Substrate for Femtotech
One can view the Pauli exclusion principle as exerting a sort of “pressure” on matter, which in some cases serves to push particles apart. In ordinary matter this Pauli pressure is minimal compared to other forces. But there is also degenerate matter – matter so extremely dense that this Pauli pressure or “degeneracy pressure”, preventing the constituent particles from occupying identical quantum states, plays a major role. In this situation, pushing two particles close together means they have effectively identical positions, so in order to obey the Pauli exclusion principle they must occupy different energy levels; the extra energy this requires acts as a powerful resistance to further compression, and causes some very odd states of matter to arise.
For instance, in ordinary matter, temperature is correlated to speed of molecular motion. Heat implies faster motion, and cooling something down makes its component molecules move more slowly. But in degenerate matter, this need not be the case. If one repeatedly cools and compresses a plasma, eventually one reaches a state where it’s not possible to compress the plasma any further, because of the exclusion principle that won’t let us put two particles in the same state (including the same place). In this kind of super-compressed plasma, the position of a particle is rather precisely defined – but according to a key principle of quantum theory, Heisenberg’s uncertainty principle, you can’t have accurate knowledge of both the position and the momentum (movement) of a particle at the same time. So the particles in a super-compressed plasma must therefore have highly uncertain momentum – i.e. in effect, they’re moving around a lot, even though they may still be very cold. This is just one example of how degenerate matter can violate our usual understanding of how materials work.
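We can put rough numbers on this confinement-induced motion. Taking Δp ≈ ħ/Δx from the uncertainty principle, and a naive nonrelativistic kinetic energy, gives an order-of-magnitude estimate of how violent confinement becomes at small scales (a sketch only – at femtometer scales the nonrelativistic formula badly overstates the energy, though the honest relativistic figure is still enormous):

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e  = 9.1093837015e-31  # electron mass, kg
eV   = 1.602176634e-19   # joules per electron-volt

def confinement_energy_eV(dx):
    """Order-of-magnitude kinetic energy (in eV) of an electron confined
    to a region of size dx, using dp ~ hbar/dx and E ~ dp**2 / (2*m)."""
    dp = hbar / dx
    return dp ** 2 / (2 * m_e) / eV

# Confined to a nanometer (atomic scale): roughly 0.04 eV – comparable
# to ordinary thermal energies at room temperature.
print(confinement_energy_eV(1e-9))

# Confined to a femtometer: this naive formula gives ~4e10 eV; a proper
# relativistic estimate is "only" ~200 MeV, but either way the point
# stands – squeezing a particle makes it frantically energetic.
print(confinement_energy_eV(1e-15))
```

This is the uncertainty-principle trade-off of the paragraph above in numerical form: the more precisely a super-compressed plasma pins down a particle’s position, the larger its unavoidable momentum spread.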
Internal structure of a neutron star, according to current theories. Note how the particles are distributed very differently than in an atom – because this is “degenerate” matter.
At the present time, degenerate matter is mostly discussed in astrophysics, in the context of neutron stars, white dwarf stars, and so forth. It has also been popular in science fiction – for example, in the Star Trek universe, neutronium (matter formed only from large numbers of neutrons, stable at ordinary gravities) is an extremely hard and durable substance, often used as armor, which conventional weapons cannot penetrate or dent at all. But so far neutronium has never been seen in reality. “Strange matter” – defined as matter consisting of an equal number of up, down and strange quarks – is another kind of degenerate matter, with potential applications to femtotech, which I’ll return to a little later.
As a substrate for femtotech, degenerate matter appears to have profound potential. It serves as an existence proof that, yes, one can build stuff other than atoms and molecules with subatomic particles. On the other hand, there is the problematic fact that all the currently known examples of degenerate matter exist at extremely high gravities, and derive their stability from this extreme gravitational force. Nobody knows, right now, how to make degenerate matter that remains stable at Earth-level gravities or anywhere near. However, neither has anybody shown that this type of degenerate matter is an impossibility according to our currently assumed physical laws. It remains a very interesting open question.
Bolonkin’s Fantastic Femtotech Designs
If you type “femtotech” into a search engine, you’ll likely come up with a 2009 paper by A.A. Bolonkin, a former Soviet physicist now living in Brooklyn, entitled “Femtotechnology: Nuclear Matter with Fantastic Properties”. Equations and calculations notwithstanding, this is an explicitly speculative paper – but the vision it presents is intriguing.
Bolonkin describes a new (and as yet unobserved) type of matter he calls “AB-matter”, defined as matter which exists at ordinary Earth-like gravities, yet whose dynamics are largely governed by Pauli-exclusion-based degeneracy pressure. He explores the potential of creating threads, bars, rods, tubes, nets and so forth using AB-matter. He argues that
“this new ‘AB-Matter’ has extraordinary properties (for example, tensile strength, stiffness, hardness, critical temperature, superconductivity, supertransparency and zero friction), which are up to millions of times better than corresponding properties of conventional molecular matter. He shows concepts of design for aircraft, ships, transportation, thermonuclear reactors, constructions and so on from nuclear matter. These vehicles will have unbelievable possibilities (e.g., invisibility, ghost-like penetration through any walls and armor, protection from nuclear bomb explosions and any radiation flux).”
All this sounds exciting indeed! And the parallels between Bolonkin’s diagrams and Drexler’s diagrams in Nanosystems are obvious. But nowhere in Bolonkin’s fascinating thought-piece does he address the million-dollar question of how and why he thinks such structures could be made stable.
I discussed this with Steve Omohundro, a fellow AI researcher and futurist thinker who started his career as a physicist, and Steve very articulately expressed the same “common sense nuclear physics” worries I experienced on reading Bolonkin’s paper:
"A standard model for a nucleus is the "liquid drop" model and it gives pretty good predictions. Basically it treats the nucleus as a liquid with a pretty high surface tension. The nucleons in the center are energetically very happy because they are surrounded by other nucleons attracted by the strong interaction. The nucleons on the surface are not so energetically happy because they interact with fewer other nucleons than they might otherwise. This creates a high effective "surface tension" for the nuclear liquid. That’s what makes nuclei want to be spherical. And when they get too big they become unstable because the surface area is relatively larger and electrostatic repulsion overcomes the nuclear attraction.
All of Bolonkin’s proposed femtostructures seem unstable to me. His femto rods or whiskers are like streams of water which are subject to instabilities that cause them to break into a sequence of droplets. Imagine one of his rods periodically squeezing inward and outward keeping the volume fixed. If the surface area is decreased the perturbation will be increased and eventually break the rod into droplets.
Even if they weren’t subject to that instability, there would be tremendous tensile force trying to pull the two ends of a rod together and turning it into a ball (which has a smaller surface area than the same volume cylinder). I didn’t see any suggestions for what he wants to use to counteract that tensile force."
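Steve’s “rods break into droplets” worry is essentially the classical Rayleigh–Plateau instability: a liquid cylinder is unstable to any axial ripple whose wavelength exceeds the cylinder’s circumference, which is why a thin stream of water breaks into beads. Here’s a minimal sketch of that criterion – classical fluid reasoning only, and whether it really carries over to the nuclear “liquid” is exactly the open question:

```python
import math

def plateau_unstable(radius, wavelength):
    """Classical Rayleigh-Plateau criterion: a liquid cylinder of the
    given radius is unstable to an axial surface ripple of the given
    wavelength when the wavelength exceeds the circumference."""
    return wavelength > 2 * math.pi * radius

# A hypothetical "femto rod" 1 fm in radius: any ripple longer than
# about 6.3 fm grows, so a long rod always has unstable modes available.
r = 1e-15
print(plateau_unstable(r, 10e-15))  # True  - long ripple pinches the rod into droplets
print(plateau_unstable(r, 3e-15))   # False - short ripple is damped by surface tension
```

Since a long thin rod is always much longer than its own circumference, some destabilizing wavelength always fits – which is the crux of Steve’s objection to Bolonkin’s rods and whiskers.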
Like me, Steve has a tendency to be open-minded about wild-sounding future possibilities. But open-mindedness must be tempered with a bit of realism. I’m hoping for a sequel from Bolonkin containing at least back-of-the-envelope stability arguments.
Might Dynamic Stabilization Work on Degenerate Matter?
And in spite of his initial skeptical reaction, after a little more thought Steve had a rather interesting brainstorm:
I just had a thought about how to stabilize degenerate femtomatter: use dynamic stabilization. The classic example is the shaking inverted pendulum. An upside down pendulum is unstable, falling either left or right if perturbed. But if you shake the base at a sufficiently high frequency, it adds a "ponderomotive" pseudopotential which stabilizes the unstable fixed point. Here’s a video of a guy who built one:
The same approach can stabilize fluid instabilities. If you turn a cup of fluid upside down, the perfectly flat surface is an unstable equilibrium. The Rayleigh-Taylor instability causes ripples to grow and the fluid to spill out. But, I remember seeing a guy years ago who put a cup of oil in a shaking apparatus and was able to turn it upside down without it spilling. So the oscillations were able to stabilize all the fluid modes at once. I wonder if something similar might be used to stabilize degenerate matter at the femto scale?
A fascinating idea indeed! Instead of massive gravity or massive heat, perhaps one could use incredibly fast, low-amplitude vibrations to stabilize degenerate matter. How to vibrate subatomic particles that fast is a whole other matter, and surely a difficult engineering problem – but still, this seems a quite promising avenue. It would be interesting to do some mathematics regarding the potential dynamic stabilization of various configurations of subatomic particles subjected to appropriate vibrations.
An inverted pendulum kept vertical via dynamic stabilization. The rod would rotate and fall down to one side or another if it weren’t vibrating. But if it’s vibrated very fast with low amplitude, it will remain upright due to dynamic stabilization. Conceivably a similar phenomenon could be used to make stable degenerate matter, using very fast femtoscale vibrations.
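For the shaken inverted (Kapitza) pendulum there is even a standard small-amplitude stability criterion: the upright position is stabilized when a²ω² > 2gL, for drive amplitude a, drive angular frequency ω, and pendulum length L. A tiny sketch, with illustrative macroscopic numbers I’ve picked myself:

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def kapitza_stable(amplitude, freq_hz, length):
    """Standard small-amplitude criterion for the Kapitza (shaken
    inverted) pendulum: upright is stabilized when (a*omega)^2 > 2*g*L."""
    omega = 2 * math.pi * freq_hz
    return (amplitude * omega) ** 2 > 2 * g * length

# A 20 cm pendulum shaken vertically with 1 cm amplitude:
print(kapitza_stable(0.01, 50, 0.20))  # True  - 50 Hz shaking stabilizes it
print(kapitza_stable(0.01, 5, 0.20))   # False - 5 Hz shaking is too slow
```

The qualitative lesson is the one Steve drew: fast, small-amplitude driving can hold an otherwise unstable configuration in place. Whether any analogous criterion exists for degenerate matter at the femtoscale is, of course, entirely unknown.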
Of course, such metaphorical ideas must be taken with a grain of salt. When I think about the “liquid drop” model of the nucleus, I’m somewhat reminded of how the genius inventor Nikola Tesla intuitively modeled electricity as a fluid. This got him a long way compared to his contemporaries, leading him to develop AC power and ball lightning generators and all sorts of other amazing stuff – yet it also led to some mistakes, and caused him to miss some things that are implicit in the mathematics of electromagnetism but not in the intuitive metaphorical "electricity as fluid" model. For instance, Tesla’s approach to wireless power transmission was clearly misguided in some respects (even if it did contain some insights that haven’t yet been fully appreciated), and this may have been largely because of the limitations of his preferred fluid-dynamics metaphor for electricity. Where degenerate matter is concerned, metaphors to liquid drops and macroscopic shaking apparatuses may be very helpful for inspiring additional experiments, but eventually we can expect rigorous theory to far outgrow them.
The bottom line is, in the current state of physics, nobody can analytically solve the equations of nuclear physics except in special simplified cases. Physicists often rely on large-scale computer simulations to solve the equations in additional cases – but these depend on various technical simplifying assumptions, which are sometimes tuned based on conceptual assumptions about how the physics works. Intuitive models like “nucleus as water droplet” are based on the limited set of cases in which we’ve explored the solutions of the relevant equations using analytical calculations or computer simulations. So, based on the current state of the physics literature, we really don’t know if it’s possible to build stable structures of the sort Bolonkin envisions. But there are surely worthwhile avenues to explore, including Steve’s intriguing suggestion.
Gell-Mann Gives Femtotech A Definite Maybe
A few weeks ago, while at an event in San Francisco, I was thrilled to have the opportunity to discuss femtotech with Murray Gell-Mann – who is not only a Nobel Prize-winning physicist, but also one of the world’s ultimate gurus on quarks, since he invented and named the concept and worked out much of the theory of their behavior. I knew my friend Hugo de Garis had briefly discussed femtotech with Gell-Mann a decade and a half earlier, but that Gell-Mann hadn’t expressed any particular thoughts on the topic. I was curious whether his views on the topic had perhaps progressed a bit since then.
A quark-gluon plasma formed at the collision point of two relativistically accelerated gold ions in the center of the STAR detector at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory.
To my mild disappointment, Gell-Mann’s first statement to me about femtotech was that he had never thought about the topic seriously. However, he went on to say that it seemed a reasonable idea to pursue. For a mathematician and AI guy dabbling in physics, this was a relief – at least the great physicist didn’t laugh at me!
When I probed Gell-Mann about degenerate matter, he spent a while musing about the possible varieties of degenerate matter in which the ordinary notion of quark confinement is weakened. “Confinement” is the property that says quarks cannot be isolated singularly, and therefore cannot be directly observed, but can only be observed as parts of other particles like protons and neutrons. At first it was thought that quarks can only be observed in triplets, but more recent research suggests the possibility of “weak confinement” that lets you observe various aspects of individual quarks in an isolated way. Quark-gluon plasmas, which have been created in particle accelerators using very high temperatures (like, 4 trillion degrees!), are one much-discussed way of producing “almost unconfined” quarks. But Gell-Mann felt the possibilities go far beyond quark-gluon plasmas. He said he thought it possible that large groups of quarks could potentially be weakly confined in more complex ways that nobody now understands.
So after some fun discussion in this vein, I pressed Gell-Mann specifically on whether understanding these alternative forms of weak multi-quark confinement might be one way to figure out how to build stable degenerate matter at Earth gravity.
His answer was, basically, definitely maybe.
Then we changed the topic to AI and the singularity, where I’m on firmer ground – and there he was a little more positive, actually. He said he thought it was crazy to try to place a precise date on the singularity, or to estimate anything in detail about it in advance… but he was sympathetic to the notion of accelerating technological change, and very open to the idea that massively more change is on the way. And, contrary to his fellow physicist Roger Penrose, he expressed doubt that quantum computing (let alone femtocomputing) would be necessary for achieving human-level AI. Even if the human brain somehow uses strange quantum effects in some particulars, he felt, digital computers should most likely be enough to achieve human-level intelligence.
A few moments later at the same event, I asked a young Caltech physics postdoc the same questions about degenerate matter and femtotech – and he gave a similar answer, only mildly more negative in tone. He said it seemed somewhat unlikely that one could make room-temperature stable structures using degenerate matter, but that he couldn’t think of any strong reason why it would be impossible.
Currently, it seems, where degenerate matter based femtotech is concerned, nobody knows…
Strange Matter and Other Strange Matters
Gell-Mann’s comments reminded me of strangelets – strange hypothetical constructs I first found out about last year when reading about some strange people who had the strange idea that the Large Hadron Collider might destroy the world by unleashing a strange chain reaction turning the Earth into strangelets. Fortunately this didn’t happen – and it seems at least plausible that strangelets might pose a route to stable degenerate matter of a form useful for femtotech.
A strangelet is (or would be, if they exist at all, which is unknown) an entity consisting of roughly equal numbers of up, down and strange quarks. A small strangelet would be a few femtometers across, with around the mass of a light nucleus. A large strangelet could be meters across or more, and would then be called a “strange star” or a “quark star.”
In a (hypothetical) strange star, quarks are not confined in the traditional sense, but may still be thought of as “weakly confined” in some sense (at least, that’s Gell-Mann’s view).
So far, all the known particles with strange quarks – like the lambda particle – are unstable. But there’s no reason to believe that states with a larger number of quarks would have to suffer from this instability. According to Bodmer and Witten’s “strange matter hypothesis,” if enough quarks are collected together, you may find that the lowest energy state of the collective is a strangelet, i.e. a state in which up, down, and strange quarks are roughly equal in number.
So where does the End of the World come in? There are some interesting (albeit somewhat speculative) arguments to the effect that if a strangelet encounters ordinary matter, it could trigger a chain reaction in which the ordinary matter gets turned into strangelets, atom by atom, at an accelerating pace. Once one strangelet hits a nucleus, it would likely turn it into strange matter, thus producing a larger and more stable strangelet, which would in turn hit another nucleus, and so on. Goodbye Earth, hello huge hot ball of strange matter. This was the source of the worries about the LHC – worries which did not eventuate: when the LHC ran, no strangelets were observed to be produced.
One of the many unknowns about strangelets is their surface tension – nobody knows how to calculate this, at present. If the surface tension is strong enough, large stable strangelets should be possible – and potentially, strangelets with complex structure as femtotech requires.
And of course, nobody knows what happens if you vibrate strangelets very very fast with small amplitude – can you produce stable strangelets via dynamic stabilization? Could this be a path to viable femtotechnology, even if stable strangelets don’t occur in nature? After all, carbon nanotubes appear not to occur in nature either.
The Future of Femtotech
So what’s the bottom line – is there still more room at the bottom?
Nanotech is difficult engineering based on mostly known physics. Femtotech, on the other hand, pushes at the boundaries of known physics. When exploring possible routes to femtotech, one quickly runs up against cases where physicists just don’t know the answer.
Degenerate matter of one form or another seems a promising potential route to femtotech. Bolonkin’s speculations are intriguing, as are the possibilities of strangelets or novel weakly confined multi-quark systems. But the issue of stability is a serious one; nobody yet knows whether large strangelets can be made stable, or whether degenerate matter can be created at normal gravities, nor whether weakly confined quarks can be observed at normal temperatures, etc. Even where the relevant physics equations are believed known, the calculations are too hard to do given our present analytical and computational tools. And in some cases, e.g. strangelets, we run into situations where different physics theories held by respected physicists probably yield different answers.
Putting my AI futurist hat on for a moment, I’m struck by what a wonderful example we have here of the potential for an only slightly superhuman AI to blast way past humanity in science and engineering. The human race seems on the verge of understanding particle physics well enough to analyze possible routes to femtotech. If a slightly superhuman AI, with a talent for physics, were to make a few small breakthroughs in computational physics, then it might (for instance) figure out how to make stable structures from degenerate matter at Earth gravity. Bolonkin-style femtostructures might then become plausible, resulting in femtocomputing – and the slightly superhuman AI would then have a computational infrastructure capable of supporting massively superhuman AI. Can you say “singularity”? Of course, femtotech may be totally unnecessary in order for a Vingean singularity to occur (in fact I strongly suspect so). But be that as it may, it’s interesting to think about just how much practical technological innovation might ensue from a relatively minor improvement in our understanding of fundamental physics.
Is it worth thinking about femtotech now, when the topic is wrapped up with so much unresolved physics? I think it is, if for no other reason than to give the physicists a nudge in certain directions that might otherwise be neglected. Most particle physics work – even experimental work with particle accelerators – seems to be motivated mainly by abstract theoretical interest. And there’s nothing wrong with this – understanding the world is a laudable aim in itself; and furthermore, over the course of history, scientists aiming to understand the world have spawned an awful lot of practically useful by-products. But it’s interesting to realize that there are potentially huge practical implications waiting in the wings, once particle physics advances a little more – if it advances in the right directions.
So, hey, all you particle physicists and physics funding agency program managers reading this article (and grumbling at my oversimplifications; sorry, this is tough stuff to write about for a nontechnical audience!), please take note – why not focus some attention on exploring the possibility of complexly structured degenerate matter under Earthly conditions, and other possibly femtotech-related phenomena such as those mentioned in Hugo de Garis’s companion essay?
Is there still plenty more room at the bottom, after the nanoscale is fully explored? It seems quite possibly so – but we need to understand what goes on way down there a bit better before we can build stuff at the femtoscale. Fortunately, given the exponentially accelerating progress we’re seeing in some relevant areas of technology, the wait for this understanding and the ensuing technologies may not be all that long.