H+ Magazine
Covering technological, scientific, and cultural trends that are changing – and will change – human beings in fundamental ways.

Editor's Blog

Ben Goertzel
January 10, 2011


Not long ago nanotechnology was a fringe topic; now it's a flourishing engineering field, and fairly mainstream.  For example, while writing this article, I happened to receive an email advertisement for the “Second World Conference on Nanomedicine and Drug Delivery,” in Kerala, India. It wasn't so long ago that nanomedicine seemed merely a flicker in the eyes of Robert Freitas and a few other visionaries!

But nano is not as small as the world gets.  A nanometer is 10^-9 meters – the scale of atoms and molecules.  A water molecule is a few tenths of a nanometer across, and a germ is around a thousand nanometers across.  On the other hand, a proton has a diameter of a couple of femtometers – and a femtometer, at 10^-15 meters, makes a nanometer seem positively gargantuan.  Now that the viability of nanotech is widely accepted (in spite of some ongoing heated debates about the details), it's time to ask: what about femtotech?  Picotech and other technologies at the scales between nano and femto seem relatively uninteresting, because we don't know of any basic constituents of matter that exist at those scales.  But femtotech, based on engineering structures from subatomic particles, makes perfect conceptual sense, though it's certainly difficult given current technology.
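
To make these scale gaps concrete, here's a trivial back-of-the-envelope comparison in Python (the sizes are rough textbook values, good only to an order of magnitude):

    # Rough characteristic sizes, in meters (order-of-magnitude textbook values)
    water_molecule = 3e-10    # a few tenths of a nanometer across
    germ           = 1e-6     # around a thousand nanometers
    proton         = 1.7e-15  # a couple of femtometers in diameter

    nanometer, femtometer = 1e-9, 1e-15

    print(nanometer / femtometer)    # 1000000.0 -- a nanometer is a million femtometers
    print(germ / water_molecule)     # ~3300 -- a germ dwarfs a molecule
    print(water_molecule / proton)   # ~176000 -- a molecule dwarfs a proton far more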

The nanotech field was arguably launched by Richard Feynman's 1959 talk “There's Plenty of Room at the Bottom.”  As Feynman wrote there,

"It is a staggeringly small world that is below. In the year 2000, when they look back at this age, they will wonder why it was not until the year 1960 that anybody began seriously to move in this direction.

Why cannot we write the entire 24 volumes of the Encyclopaedia Britannica on the head of a pin?"



Note: the diagrams in the background of this picture were added much later in order to market Feynman's lecture – Feynman articulated the core nanotech vision but he didn't use the word “nanotech” nor depict details of this nature.

The next big step toward nanotech was Eric Drexler's classic 1992 book Nanosystems, which laid out conceptual designs for a host of nanomachines, including nanocomputer switches, general-purpose molecular assemblers, and an amazing variety of other fun stuff.  Drexler's 1986 book Engines of Creation also played a large role, bringing the notion of nanotech to the masses.  Contemporary nanotech mostly focuses on narrower nano-engineering than what Drexler envisioned, but arguably it's building the tools and understanding that will ultimately be needed to realize Feynman's and Drexler's vision.  For instance, a lot of work is now going into the manufacture and utilization of carbon nanotubes, which have a variety of applications, from the relatively mundane (e.g. super-strong fabrics and fibers) to potential roles as components of more transformative nanosystems like nanocomputers or molecular assemblers.  And a few labs, such as Zyvex, are currently working directly in a Drexlerian direction.

But Feynman's original vision, while it was focused on the nano-scale, wasn't restricted to that level.  There's plenty of room at the bottom, as he said – and the nano-scale is not the bottom!  There's plenty more room down there to explore.

One might argue that, since practical nanotech is still at such an early stage, it's not quite the time to be thinking about femtotech.  But technology is advancing faster and faster each year, so it makes sense to think a bit further ahead than contemporary hands-on engineering efforts.  My friend and colleague Hugo de Garis has been talking to me about femtotech for a while, and has touched on the topic in various lectures and interviews; he convinced me that the topic is worth exploring in spite of our current lack of knowledge regarding its practical realization.  After all, when Feynman gave his "Plenty of Room at the Bottom" lecture, nanotech also appeared radically pie-in-the-sky.  Hugo's personal take on femtotech is given in an essay he wrote recently, which appears here as a companion piece; the two articles are intended to be read together.

There are many possible routes to femtotech, and Hugo notes a number of them in his article, including some topics I won't touch here at all, like micro black holes and Bose-Einstein condensation of squarks.  I'll focus here largely on a particular class of approaches to femtotech based on the engineering of stable degenerate matter – not because I think this is the only interesting way to think about femtotech, but merely because one has to choose some definite direction to explore if one wants to go into any detail at all.

Physics at the Femto Scale

To understand the issues involved in creating femtotech, you'll first need to recall a few basics about particle physics.

In the picture painted by contemporary physics, everyday objects like houses and people and water are made of molecules, which are made of atoms, which in turn are made of subatomic particles.  There are also various subatomic particles that don't form parts of atoms (such as photons, the particles of light, and many others).  The behavior of these particles is extremely weird by the standards of everyday life – with phenomena like non-local correlations between distant events, observer-dependence of reality, quantum teleportation and lots of other good stuff.  But I won't take time here to review quantum mechanics and its associated peculiarities, just to run through a few facts about subatomic particles needed to explain how femtotech might come about.

Subatomic particles fall into two categories: fermions and bosons.  Each category contains a pretty diverse set of particles, but they're grouped together because they share some important commonalities – most fundamentally, fermions carry half-integer spin while bosons carry integer spin, a difference that turns out to matter enormously below.

The particles that serve as the building blocks of matter are all fermions.  Atoms are made of protons, neutrons and electrons.  Electrons are fermions, and so are quarks, which combine to build protons and neutrons.  Quarks appear to occur in nature only in groups, most commonly groups of 2 or 3.  A proton contains two up quarks and one down quark, while a neutron consists of one up quark and two down quarks; the quarks are bound together within each proton or neutron by other particles called gluons.  Mesons consist of 2 quarks – a quark and an anti-quark.  There are six basic types of quark, beguilingly named Up, Down, Bottom, Top, Strange, and Charm.  Of the four forces currently recognized in the universe – electromagnetism, gravity, and the weak and strong nuclear forces – quarks are most closely associated with the strong nuclear force, which controls most of their dynamics.  But quarks also interact with the weak force, which can, for example, transmute one kind of quark into another – a phenomenon that underlies some kinds of radioactive decay, such as beta decay.
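
As a quick sanity check on this quark bookkeeping, here's a minimal Python sketch; the quark charges are the standard textbook values (in units of the elementary charge), and the composition lists just restate the paragraph above:

    from fractions import Fraction

    # Electric charges of the up and down quarks, in units of the elementary charge e
    charge = {"up": Fraction(2, 3), "down": Fraction(-1, 3)}

    # Quark content of the two nucleons, as described above
    nucleons = {"proton": ["up", "up", "down"], "neutron": ["up", "down", "down"]}

    for name, quarks in nucleons.items():
        total = sum(charge[q] for q in quarks)
        print(f"{name}: {' + '.join(quarks)} -> total charge {total} e")
    # proton: up + up + down -> total charge 1 e
    # neutron: up + down + down -> total charge 0 e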

On the other hand, bosons are also important – for example photons, the particle-physics version of light, are bosons.  Gravitons, the gravity particles proposed by certain theories of gravitation, would also be bosons.

The nucleus of an atom contains protons and neutrons.  The electrons are arranged in multiple shells around the nucleus, due to the Pauli exclusion principle.  Note also that this sort of "solar system" model of particles as objects orbiting other objects is just a heuristic approximation; there are many complexities, and a more accurate view would depict each particle as a special sort of wave function.

The carbon atom, whose electrons are distributed across two shells.

Finally, just one more piece of background knowledge before we move on to femtotech.  Fermions, unlike bosons, obey the Pauli exclusion principle, which says that no two identical fermions can occupy the same quantum state at the same time.  For example, each electron in an atom is characterized by a unique set of quantum numbers: the principal quantum number, which gives its energy level; the orbital and magnetic quantum numbers, which characterize its angular momentum; and the spin quantum number, which gives the direction of its spin.  If not for the Pauli exclusion principle, all of the electrons in an atom would pile up in the lowest energy state (the K shell, the innermost shell of electrons orbiting the nucleus).  But the exclusion principle implies that different electrons must occupy different quantum states, which forces some of the electrons into different positions, leading to the formation of additional shells (in atoms with enough electrons).
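
The shell structure follows directly from counting allowed states.  Here's a little Python sketch that enumerates the distinct quantum-number combinations in each shell – this is just the standard counting, nothing femtotech-specific:

    # For principal quantum number n, the orbital number l runs 0..n-1,
    # the magnetic number m runs -l..l, and the spin can point up or down.
    # The Pauli exclusion principle allows one electron per distinct combination.
    def shell_capacity(n):
        states = [(l, m, s)
                  for l in range(n)
                  for m in range(-l, l + 1)
                  for s in ("up", "down")]
        return len(states)

    for n in (1, 2, 3):
        print(f"shell n={n}: {shell_capacity(n)} electrons")
    # shell n=1: 2 electrons   (the K shell)
    # shell n=2: 8 electrons   (where carbon's other four electrons live)
    # shell n=3: 18 electrons  (in general, 2*n^2)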

Degenerate Matter as a Possible Substrate for Femtotech

One can view the Pauli exclusion principle as exerting a sort of "pressure" on matter, which in some cases serves to push particles apart.  In ordinary matter this Pauli pressure is minimal compared to other forces.  But there is also degenerate matter – matter so extremely dense that this Pauli pressure or "degeneracy pressure", which prevents the constituent particles from occupying identical quantum states, plays a major role.  In this situation, pushing two particles close together gives them effectively identical positions, so in order to obey the Pauli exclusion principle they must take on different energy levels; this creates enormous resistance to further compression, and causes some very odd states of matter to arise.

For instance, in ordinary matter, temperature is correlated with the speed of molecular motion.  Heat implies faster motion, and cooling something down makes its component molecules move more slowly.  But in degenerate matter, this need not be the case.  If one repeatedly cools and compresses a plasma, eventually one reaches a state where it's not possible to compress the plasma any further, because of the exclusion principle, which won't let us put two particles in the same state (including the same place).  In this kind of super-compressed plasma, the position of a particle is rather precisely defined – but according to a key principle of quantum theory, Heisenberg's uncertainty principle, you can't have accurate knowledge of both the position and the momentum of a particle at the same time.  So the particles in a super-compressed plasma have highly uncertain momentum – in effect, they're moving around a lot, even though they may still be very cold.  This is just one example of how degenerate matter can violate our usual understanding of how materials work.
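
To see how hard the uncertainty principle bites at these scales, here's a back-of-the-envelope estimate in Python – non-relativistic and order-of-magnitude only, so treat the outputs as rough indications:

    hbar = 1.054571817e-34   # reduced Planck constant, J*s
    eV   = 1.602176634e-19   # joules per electron-volt
    m_e  = 9.109e-31         # electron mass, kg
    m_n  = 1.675e-27         # neutron mass, kg

    def min_kinetic_energy_eV(mass, dx):
        """Confine a particle to a region of size dx: the uncertainty principle
        forces a momentum uncertainty dp ~ hbar/(2*dx), hence a kinetic energy
        of order dp^2/(2m).  Returned in electron-volts."""
        dp = hbar / (2 * dx)
        return dp**2 / (2 * mass) / eV

    print(min_kinetic_energy_eV(m_e, 1e-10))  # electron confined to an atom: ~1 eV
    print(min_kinetic_energy_eV(m_n, 1e-15))  # neutron squeezed to a femtometer: ~5 MeV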

Internal structure of a neutron star, according to current theories.  Note how the particles are distributed very differently than in an atom – because this is “degenerate” matter.

At the present time, degenerate matter is mostly discussed in astrophysics, in the context of neutron stars, white dwarf stars, and so forth.  It has also been popular in science fiction – for example, in the Star Trek universe, neutronium (matter formed only from large numbers of neutrons, stable at ordinary gravities) is an extremely hard and durable substance, often used as armor, which conventional weapons cannot penetrate or dent at all.  But so far neutronium has never been seen in reality.  “Strange matter” – defined as matter consisting of an equal number of up, down and strange quarks – is another kind of degenerate matter, with potential applications to femtotech, which I'll return to a little later.

As a substrate for femtotech, degenerate matter appears to have profound potential.  It serves as an existence proof that, yes, one can build stuff other than atoms and molecules with subatomic particles.   On the other hand, there is the problematic fact that all the currently known examples of degenerate matter exist at extremely high gravities, and derive their stability from this extreme gravitational force.   Nobody knows, right now, how to make degenerate matter that remains stable at Earth-level gravities or anywhere near.  However, neither has anybody shown that this type of degenerate matter is an impossibility according to our currently assumed physical laws.  It remains a very interesting open question.

Bolonkin's Fantastic Femtotech Designs

If you type “femtotech” into a search engine, you'll likely come up with a 2009 paper by A.A. Bolonkin, a former Soviet physicist now living in Brooklyn, entitled “Femtotechnology: Nuclear Matter with Fantastic Properties”.  Equations and calculations notwithstanding, this is an explicitly speculative paper – but the vision it presents is intriguing.

Bolonkin describes a new (and as yet unobserved) type of matter he calls "AB-matter": matter which exists at ordinary Earth-like gravities, yet whose dynamics are largely governed by degeneracy pressure deriving from the Pauli exclusion principle.  He explores the potential of creating threads, bars, rods, tubes, nets and so forth from AB-matter.  He argues that

“this new ‘AB-Matter’ has extraordinary properties (for example, tensile strength, stiffness, hardness, critical temperature, superconductivity, supertransparency and zero friction), which are up to millions of times better than corresponding properties of conventional molecular matter.  He shows concepts of design for aircraft, ships, transportation, thermonuclear reactors, constructions and so on from nuclear matter.  These vehicles will have unbelievable possibilities (e.g., invisibility, ghost-like penetration through any walls and armor, protection from nuclear bomb explosions and any radiation flux).”

All this sounds exciting indeed!  And the parallels between Bolonkin's diagrams and Drexler's diagrams in Nanosystems are obvious.  But nowhere in Bolonkin's fascinating thought-piece does he address the million-dollar question of how and why he thinks such structures could be made stable.

I discussed this with Steve Omohundro, a fellow AI researcher and futurist thinker who started his career as a physicist, and Steve very articulately expressed the same “common sense nuclear physics” worries I experienced on reading Bolonkin's paper:

"A standard model for a nucleus is the "liquid drop" model and it gives pretty good predictions.  Basically it treats the nucleus as a liquid with a pretty high surface tension.  The nucleons in the center are energetically very happy because they are surrounded by other nucleons attracted by the strong interaction.  The nucleons on the surface are not so energetically happy because they interact with fewer other nucleons than they might otherwise.  This creates a high effective "surface tension" for the nuclear liquid.  That's what makes nuclei want to be spherical.  And when they get too big they become unstable because the surface area is relatively larger and electrostatic repulsion overcomes the nuclear attraction.

All of Bolonkin's proposed femtostructures seem unstable to me.  His femto rods or whiskers are like streams of water which are subject to instabilities that cause them to break into a sequence of droplets.  Imagine one of his rods periodically squeezing inward and outward keeping the volume fixed.  If the surface area is decreased the perturbation will be increased and eventually break the rod into droplets.

Even if they weren't subject to that instability, there would be tremendous tensile force trying to pull the two ends of a rod together and turning it into a ball (which has a smaller surface area than the same volume cylinder).  I didn't see any suggestions for what he wants to use to counteract that tensile force."
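
Steve's last point is easy to check with elementary geometry.  Here's a small Python sketch comparing the surface area of a thin rod to that of a sphere of the same volume (the numbers are purely illustrative):

    import math

    def cylinder_area(radius, volume):
        """Total surface area (sides plus end caps) of a cylinder of given volume."""
        height = volume / (math.pi * radius**2)
        return 2 * math.pi * radius * height + 2 * math.pi * radius**2

    def sphere_area(volume):
        radius = (3 * volume / (4 * math.pi)) ** (1 / 3)
        return 4 * math.pi * radius**2

    V = 1.0  # arbitrary units; only the ratio matters
    for r in (0.05, 0.1, 0.2):
        ratio = cylinder_area(r, V) / sphere_area(V)
        print(f"rod radius {r}: {ratio:.1f}x the area of the equal-volume sphere")
    # Thinner rods pay an ever larger surface-area (hence surface-tension) penalty,
    # which is why an unsupported femto-rod would tend to pull itself into a ball.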



Like me, Steve has a tendency to be open-minded about wild-sounding future possibilities.  But open-mindedness must be tempered with a bit of realism.  I'm hoping for a sequel from Bolonkin containing at least back-of-the-envelope stability arguments.

Might Dynamic Stabilization Work on Degenerate Matter?

And in spite of his initial skeptical reaction, after a little more thought Steve had a rather interesting brainstorm:

I just had a thought about how to stabilize degenerate femtomatter: use dynamic stabilization.  The classic example is the shaking inverted pendulum.  An upside down pendulum is unstable, falling either left or right if perturbed.  But if you shake the base at a sufficiently high frequency, it adds a "ponderomotive" pseudopotential which stabilizes the unstable fixed point.  Here's a video of a guy who built one:



The same approach can stabilize fluid instabilities.  If you turn a cup of fluid upside down, the perfectly flat surface is an unstable equilibrium.  The Rayleigh-Taylor instability causes ripples to grow and the fluid to spill out.  But, I remember seeing a guy years ago who put a cup of oil in a shaking apparatus and was able to turn it upside down without it spilling.  So the oscillations were able to stabilize all the fluid modes at once.  I wonder if something similar might be used to stabilize degenerate matter at the femto scale?

A fascinating idea indeed!  Instead of massive gravity or massive heat, perhaps one could use incredibly fast, low-amplitude vibrations to stabilize degenerate matter.  How to vibrate subatomic particles that fast is a whole other matter, and surely a difficult engineering problem – but still, this seems quite a promising avenue.  It would be interesting to do some mathematics regarding the potential dynamic stabilization of various configurations of subatomic particles subjected to appropriate vibrations – a toy version of the underlying pendulum calculation is sketched below.

An inverted pendulum kept vertical via dynamic stabilization.  The rod would rotate and fall down to one side or another if it weren't vibrating.  But if it's vibrated very fast with low amplitude, it will remain upright due to dynamic stabilization.  Conceivably a similar phenomenon could be used to make stable degenerate matter, using very fast femtoscale vibrations.
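
As a toy illustration of Steve's suggestion, here's a quick numerical sketch of the vibrating inverted (Kapitza) pendulum in Python, using a simple semi-implicit Euler integrator.  The parameters are illustrative, chosen to satisfy the textbook stabilization criterion (a*w)^2 > 2*g*L; nothing femtoscale is being modeled here:

    import math

    g, L = 9.81, 0.1      # gravity (m/s^2) and pendulum length (m)
    a, w = 0.005, 600.0   # pivot vibration amplitude (m) and angular frequency (rad/s)

    # Kapitza's criterion for stabilizing the inverted position: (a*w)^2 > 2*g*L
    print("criterion satisfied:", (a * w) ** 2 > 2 * g * L)   # True: 9.0 > 1.962

    def max_tilt(driven, theta0=0.1, dt=1e-5, t_end=2.0):
        """Integrate theta'' = (g - a*w^2*cos(w*t)) * sin(theta) / L, with theta
        measured from the upright (inverted) position.  Returns the largest
        tilt reached, as a crude stability indicator."""
        theta, thetadot, t, worst = theta0, 0.0, 0.0, abs(theta0)
        while t < t_end:
            drive = a * w * w * math.cos(w * t) if driven else 0.0
            thetadot += (g - drive) * math.sin(theta) / L * dt
            theta += thetadot * dt
            worst = max(worst, abs(theta))
            t += dt
        return worst

    print("undriven max tilt:", max_tilt(False))  # swings right down past the bottom
    print("driven   max tilt:", max_tilt(True))   # stays close to the initial 0.1 rad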

Of course, such metaphorical ideas must be taken with a grain of salt.  When I think about the “liquid drop” model of the nucleus, I'm somewhat reminded of how the genius inventor Nikola Tesla intuitively modeled electricity as a fluid.  This got him a long way compared to his contemporaries, leading him to develop AC power and ball lightning generators and all sorts of other amazing stuff – yet it also led to some mistakes, and caused him to miss some things that are implicit in the mathematics of electromagnetism but not in the intuitive metaphorical "electricity as fluid" model.  For instance, Tesla's approach to wireless power transmission was clearly misguided in some respects (even if it did contain some insights that haven't yet been fully appreciated), and this may have been largely because of the limitations of his preferred fluid-dynamics metaphor for electricity.  Where degenerate matter is concerned, metaphors to liquid drops and macroscopic shaking apparatuses may be very helpful for inspiring additional experiments, but eventually we can expect rigorous theory to far outgrow them.

The bottom line is, in the current state of physics, nobody can analytically solve the equations of nuclear physics except in special simplified cases.  Physicists often rely on large-scale computer simulations to solve the equations in additional cases – but these depend on various technical simplifying assumptions, which are sometimes tuned based on conceptual assumptions about how the physics works.  Intuitive models like “nucleus as water droplet” are based on the limited set of cases in which we've explored the solutions of the relevant equations using analytical calculations or computer simulations.  So, based on the current state of the physics literature, we really don't know if it's possible to build stable structures of the sort Bolonkin envisions.  But there are surely worthwhile avenues to explore, including Steve's intriguing suggestion.
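
For readers who want to see the "liquid drop" intuition in quantitative form, here's a minimal sketch of the semi-empirical mass formula it gives rise to.  The coefficients are typical textbook fits in MeV; this is a crude model, shown here only because it makes the surface and Coulomb penalties Steve described concrete:

    # Semi-empirical ("liquid drop") mass formula, typical textbook coefficients in MeV
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18

    def binding_energy(A, Z):
        """Approximate binding energy (MeV) of a nucleus with A nucleons, Z protons."""
        B = (aV * A                             # volume term: interior nucleons are "happy"
             - aS * A ** (2 / 3)                # surface term: the "surface tension" penalty
             - aC * Z * (Z - 1) / A ** (1 / 3)  # Coulomb repulsion between protons
             - aA * (A - 2 * Z) ** 2 / A)       # neutron-proton asymmetry penalty
        if A % 2 == 0:                          # pairing term (even-even vs odd-odd)
            B += aP / A ** 0.5 if Z % 2 == 0 else -aP / A ** 0.5
        return B

    for A, Z, name in [(56, 26, "Fe-56"), (238, 92, "U-238")]:
        print(f"{name}: about {binding_energy(A, Z) / A:.2f} MeV of binding per nucleon")
    # Fe-56 comes out near 8.8 MeV/nucleon (close to the real peak of the curve);
    # U-238 near 7.6 -- big nuclei pay a growing surface + Coulomb price.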

Gell-Mann Gives Femtotech A Definite Maybe

A few weeks ago, while at an event in San Francisco, I was thrilled to have the opportunity to discuss femtotech with Murray Gell-Mann – who is not only a Nobel Prize winning physicist, but also one of the world's ultimate gurus on quarks, since he invented and named the concept and worked out a lot of the theory of their behavior.  I knew my friend Hugo de Garis had briefly discussed femtotech with Gell-Mann a decade and a half previously, but that he hadn't expressed any particular thoughts on the topic.  I was curious if Gell-Mann's views on the topic had perhaps progressed a bit.

A quark-gluon plasma formed at the collision point of two relativistically accelerated gold ions in the center of the STAR detector at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory.

To my mild disappointment, Gell-Mann's first statement to me about femtotech was that he had never thought about the topic seriously.  However, he went on to say that it seemed a reasonable idea to pursue.  As a mathematician and AI guy dabbling in physics, I found this a relief – at least the great physicist didn't laugh at me!!

When I probed Gell-Mann about degenerate matter, he spent a while musing about possible varieties of degenerate matter in which the ordinary notion of quark confinement is weakened.  "Confinement" is the property that says quarks cannot be isolated singly, and therefore cannot be directly observed, but can only be observed as parts of other particles like protons and neutrons.  At first it was thought that quarks could only be observed bound in such groups, but more recent research suggests the possibility of "weak confinement," which lets you observe various aspects of individual quarks in a relatively isolated way.  Quark-gluon plasmas, which have been created in particle accelerators using very high temperatures (like, 4 trillion degrees!), are one much-discussed way of producing "almost unconfined" quarks.  But Gell-Mann felt the possibilities go far beyond quark-gluon plasmas.  He said he thought large groups of quarks could potentially be weakly confined in more complex ways that nobody currently understands.

So after some fun discussion in this vein, I pressed Gell-Mann specifically on whether understanding these alternative forms of weak multi-quark confinement might be one way to figure out how to build stable degenerate matter at Earth gravity.

His answer was, basically, definitely maybe. 

Then we changed the topic to AI and the singularity, where I'm on firmer ground – and there he was a little more positive, actually.  He said he thought it was crazy to try to place a precise date on the singularity, or to estimate anything in detail about it in advance… but he was sympathetic to the notion of accelerating technological change, and very open to the idea that massively more change is on the way.  And, contrary to his fellow physicist Roger Penrose, he expressed doubt that quantum computing (let alone femtocomputing) would be necessary for achieving human-level AI.  Even if the human brain somehow uses strange quantum effects in some particulars, he felt, digital computers should most likely be enough to achieve human-level intelligence.

A few moments later at the same event, I asked a young Caltech physics postdoc the same questions about degenerate matter and femtotech – and he gave a similar answer, only mildly more negative in tone.  He said it seemed somewhat unlikely that one could make room-temperature stable structures using degenerate matter, but that he couldn't think of any strong reason why it would be impossible.

Currently, it seems, where degenerate matter based femtotech is concerned, nobody knows...

Strange Matter and Other Strange Matters

Gell-Mann's comments reminded me of strangelets – strange hypothetical constructs I first found out about last year when reading about some strange people who had the strange idea that the Large Hadron Collider might destroy the world by unleashing a strange chain reaction turning the Earth into strangelets.  Fortunately this didn't happen – and it seems at least plausible that strangelets might pose a route to stable degenerate matter of a form useful for femtotech.

A strangelet is (or would be, if they exist at all, which is unknown) an entity consisting of roughly equal numbers of up, down and strange quarks.  A small strangelet would be a few femtometers across, with around the mass of a light nucleus.  A large strangelet could be meters across or more; at stellar scales, such a body would be called a "strange star" or a "quark star."

In a (hypothetical) strange star, quarks are not confined in the traditional sense, but may still be thought of as "weakly confined" in some sense (at least, that's Gell-Mann's view).

So far, all the known particles with strange quarks – like the lambda particle – are unstable.  But there's no reason to believe that states with a larger number of quarks would have to suffer from this instability.  According to Bodmer and Witten's “strange matter hypothesis,”  if  enough quarks are collected together, you may find that the lowest energy state of the collective is a strangelet, i.e. a state in which up, down, and strange quarks are roughly equal in number. 

So where does the End of the World come in?  There are some interesting (albeit somewhat speculative) arguments to the effect that if a strangelet encounters ordinary matter, it could trigger a chain reaction in which the ordinary matter gets turned into strangelets, atom by atom, at an accelerating pace.  Once one strangelet hits a nucleus, it would likely turn it into strange matter, producing a larger and more stable strangelet, which would in turn hit another nucleus, and so on.  Goodbye Earth, hello huge hot ball of strange matter.  This was the source of the worries about the LHC – worries that did not eventuate, since no strangelets were detectably produced when the LHC ran.

One of the many unknowns about strangelets is their surface tension – nobody knows how to calculate this, at present.  If the surface tension is strong enough, large stable strangelets should be possible – and potentially, strangelets with complex structure as femtotech requires. 

And of course, nobody knows what happens if you vibrate strangelets very very fast with small amplitude – can you produce stable strangelets via dynamic stabilization?  Could this be a path to viable femtotechnology, even if stable strangelets don't occur in nature?  After all, carbon nanotubes appear not to occur in nature either.

The Future of Femtotech

So what's the bottom line – is there still more room at the bottom?

Nanotech is difficult engineering based on mostly known physics.  Femtotech, on the other hand, pushes at the boundaries of known physics.  When exploring possible routes to femtotech, one quickly runs up against cases where physicists just don't know the answer. 

Degenerate matter of one form or another seems a promising potential route to femtotech.  Bolonkin's speculations are intriguing, as are the possibilities of strangelets and novel weakly confined multi-quark systems.  But the issue of stability is a serious one: nobody yet knows whether large strangelets can be made stable, whether degenerate matter can be created at normal gravities, whether weakly confined quarks can be observed at normal temperatures, and so forth.  Even where the relevant physics equations are believed known, the calculations are too hard to do with our present analytical and computational tools.  And in some cases, e.g. strangelets, we run into situations where different physics theories held by respected physicists probably yield different answers.

Putting my AI futurist hat on for a moment, I'm struck by what a wonderful example we have here of the potential for an only slightly superhuman AI to blast way past humanity in science and engineering.  The human race seems on the verge of understanding particle physics well enough to analyze possible routes to femtotech.  If a slightly superhuman AI, with a talent for physics, were to make a few small breakthroughs in computational physics, then it might (for instance) figure out how to make stable structures from degenerate matter at Earth gravity.  Bolonkin-style femtostructures might then become plausible, resulting in femtocomputing – and the slightly superhuman AI would then have a computational infrastructure capable of supporting massively superhuman AI.  Can you say “singularity”?  Of course, femtotech may be totally unnecessary in order for a Vingean singularity to occur (in fact I strongly suspect so).  But be that as it may, it's interesting to think about just how much practical technological innovation might ensue from a relatively minor improvement in our understanding of fundamental physics.

Is it worth thinking about femtotech now, when the topic is wrapped up with so much unresolved physics?  I think it is, if for no other reason than to give the physicists a nudge in certain directions that might otherwise be neglected.  Most particle physics work – even experimental work with particle accelerators – seems to be motivated mainly by abstract theoretical interest.  And there's nothing wrong with this – understanding the world is a laudable aim in itself; and furthermore, over the course of history, scientists aiming to understand the world have spawned an awful lot of practically useful by-products.  But it's interesting to realize that there are potentially huge practical implications waiting in the wings, once particle physics advances a little more – if it advances in the right directions. 

So, hey, all you particle physicists and physics funding agency program managers reading this article (and grumbling at my oversimplifications; sorry, this is tough stuff to write about for a nontechnical audience!), please take note – why not focus some attention on exploring the possibility of complexly structured degenerate matter under Earthly conditions, and other possibly femtotech-related phenomena such as those mentioned in Hugo de Garis's companion essay?

Is there still plenty more room at the bottom, after the nanoscale is fully explored?  It seems quite possibly so – but we need to understand what goes on way down there a bit better before we can build stuff at the femtoscale.  Fortunately, given the exponentially accelerating progress we're seeing in some relevant areas of technology, the wait for this understanding and the ensuing technologies may not be all that long.
 

 

22 Comments

    As a professional chemist for many decades, I'll point out that chemists have been working in the sub-nano field since the 1800's (no, not me specifically). We like to measure in angstroms (0.1 nanometers). My associates in physics want to know when chemists are going to work on really small stuff. The point is that nanotechnology, though not the term, is very old, while basic nano-science (the ability to "see" and measure) is much more recent. The nano-scale, and many of the related effects, have been used and (mostly) understood by major fields of science, especially chemistry and physics. The first, and perhaps the largest, application of nano-materials is carbon black in tires, which doubled wear life - industry knew that smaller was better, but could not directly quantify it. But there are many more older examples, and my favorite is nano-gold in red glassware and ancient cathedral windows - this is chem and physics. People still drink colloidal silver suspensions (medical of sorts). Anodized aluminum is a beautiful nanoscale finish, and color-anodized has cleverly trapped colorants within the oxide structure. Nano should not ignore the heritage of small or it will continue to do substantial reinventing instead of doing the harder work that leads to fundamental breakthroughs.

    Femtotechnology? I'm still waiting for my nanotechnology. Don't we need to develop real nanotechnology before we can think about femtotechnology?

    Mark, exactly what do you object to, regarding Molecular manufacturing? Can you please explain in detail, point by point, what is possible and what is not possible? Are you claiming that carbon based diamondoid and fullerene machine parts cannot be formed? Are you claiming that machines cannot make copies of themselves under programmable control? I would like you to be more specific.

    Considering that the "nanotechnologists" strike me as a bunch of flour-flushers, why should I take "femtotech" seriously? Drexler published his PNAS paper in 1981, i.e., 30 years ago, and yet he and his fellow Kool-Aid drinkers have nothing tangible for all the studies, conferences, computer models, stints doing "research" on nanotech at Xerox PARC, and other excuses to seek rents and waste time chasing after a mirage not that far removed from cold fusion.

    Technologies which happen to get the physics right from the beginning, by contrast, can show rapid progress. Thirty years separates biplanes from the SR-71; 30 years separates the first transistor from the Apple II; 30 years separates the first laboratory demonstration of the laser in 1960 from the clinical trials of LASIK surgery circa 1990.

    Why can't the clankety-clank "nanotechnologists" produce the goods like these real fields of engineering? Because they've based it on bad science?

    Nice article, really interesting and well written.

    Transistors *are* nanotechnology nowadays – they scale at around 25 nanometers or smaller – as do quantum dot lasers, nanotube electronics and many, many more technologies.
    Nanotechnology is a proven and flourishing field of engineering. I've just graduated in physics engineering and this field is promising and wonderful, I tell ya!

    I don't know what's wrong with nanotech science... care to explain?

    What is your core claim Mark? Is it that nanotechnology is impossible? The existence of biological cells proves otherwise. Is it that nanotechnology will never exist? It already exists, but is quite primitive (even if we don't talk about "soft nanotech" at all). You can deposit single atoms and look at them with a scanning tunneling microscope. Sure, it hasn't progressed *that* quickly, but the physics are clearly not wrong.

    Mark's previous objections to nanotech have all echoed the long answered Smalley objections. He's yet to offer anything much more than the same old "sticky fingers" argument.

    You know, one could just as easily claim the computer industry is a bunch of 'flour-flushers' using Mark Plus' approach, which is to ignore all the advances relevant to this area and focus instead on the concepts that remain, well, conceptual. This is what Plus is doing with respect to nanotechnology. The concepts outlined by Eric Drexler in 'Nanosystems' have not been physically built, it is true. But that hardly means the entire field of nanotechnology has produced nothing but vaporware. Every day materials scientists, biotechnologists, computer chip manufacturers, and chemists are making progress in "the application of quantum theory and other nano-specific phenomena to fundamentally control the properties and behavior of matter" (to quote Mihail C. Roco).

    Also, Roco tells us what kind of nanotech is plausible for the 2010-2015 period we are currently in: "Starting around 2010, workers will cultivate expertise with systems of nanostructures, directing large numbers of intricate components to specified ends. One application could involve the guided self-assembly of nanoelectronic components into three-dimensional circuits and whole devices. Medicine could employ such systems to improve the tissue compatibility of implants, or to create scaffolds for tissue regeneration, or perhaps even to build artificial organs". Indeed, R+D in all these areas is looking quite promising.

    It is only AFTER 2015 that the field will expand to include "molecular nanosystems--heterogeneous networks in which molecules and supramolecular structures serve as distinct devices. The proteins inside cells work together this way, but whereas biological systems are water-based and markedly temperature-sensitive, these molecular nanosystems will be able to operate in a far wider range of environments and should be much faster. Computers and robots could be reduced to extraordinarily small sizes. Medical applications might be as ambitious as new types of genetic therapies and antiaging treatments".

    Saying we have not managed to produce post-2015 nanotech and therefore nanotech has failed is like saying the Wright brothers failed to achieve flight because they never invented supersonic jet fighters.

    Ben, thanks for writing such an approachable article on a subject that could be somewhat difficult for us non-physicists. I found it quite inspiring and am left once again with great hope and the conviction of "Not if, but when" this comes to be.

    At least Richard Smalley discovered a new form of carbon molecule (buckminsterfullerene) and won a Nobel Prize. I don't consider him a four-flusher, nor someone like Craig Venter. They have tangible accomplishments to show for their careers.

    "Nanotechnologists," by contast, have produced nothing but theory and glib talk about nanomachines which can make all our dreams come true, if they don't destroy us first. The nonarrival of anything resembling a nanoassembler after a generation does more to call bullshit on the whole idea than anything I can point out.

    Transistors exploit quantum mechanics, not the mechanical engineering principles assumed by Drexler's "nanotechnology."

    Even Drexler has complained how people who do work in chemistry and materials science have appropriated his neologism to make real fields sound sexier and more "futuristic," while ignoring his definition of the term.

    By the way, what would you think of transistors if someone proposed the idea decades ago, but nobody could build one? Yet we had a whole industry of people in 2011 calling themselves "transistorologists" who keep publishing theoretical studies and holding conferences about the social, economic, environmental, etc. consequences of transistors when they eventually arrive? Oh, and science fiction writers regularly employed these speculative transistors as their favorite super-science do-anything plot device?

    >You know, one could just as easily claim the computer industry is a bunch of 'flour-flushers'

    I'd say that if the state of computing resembled the history of Drexler's "nanotechnology." For example, suppose we still had no computers now, 75 years after Alan Turing's famous paper, but we had a whole industry of "computer scientists" who kept publishing theoretical studies and holding conferences about the consequences of computers when they eventually arrive. Instead we had capable mainframe computers starting a few years after Turing's paper.

    >Every day materials scientists, biotechnologists, computer chip manufacturers, chemists are making progress in "the application of quantum theory and other nano-specific phenomena to fundamentally control the properties and behavior of matter" (to quote Mihail C. Roco).

    The application of quantum mechanics to technology started long before Drexler became a public figure. The people who work in these areas just started adding "nano" prefixes to things which had other names before then to try to make them sound more "futuristic" and draw more funding.

    Drexler, by contrast, apparently engages in something like quantum-mechanics denialism, hence the stillbirth of this imaginary technological revolution.

    Think the film "Virtuosity", but beyond glass as a material. Then think an army of self-maintaining android workers and servitors, self-replicating highways and buildings, self-building vehicles and aircraft, self-mining/self-harvesting commodities.

    Humanity can finally focus on quality of life and innovation, instead of merely surviving, rushing to the office, or wallowing in gross meaningless materialism. Democracy must be tempered with equitable wealth distribution, as in the link below:

    http://www.facebook.com/group.php?gid=36665503866

    148.93 million km2 can support 30 billion people if re-distributed as is, based on the following facts sourced from various sites on crop yields. A 0.4 hectare (1 acre) plot of average fertility produces yearly:

    15kg(saffron)
    50kg(venison/cardamon)
    100kg(honey/cinnamon/pepper)
    200kg(beef/cocoa/soy)
    600kg(mutton/lamb/nutmeg/chilli)
    600kg(wheat)
    1000kg(vanilla)
    1500kg(fish/fowl)
    3000kg(rice/corn)
    5000-10,000kg(various fruits)
    10,000kg(rye)
    25,000kg(potatoes/some nuts)

    These are figures for high-density farming; low-density figures are many times less. Each person eats 300-500kg of a variety of food per year.

    A 0.4 hectare plot can in fact support 4-40 people, depending on density and the quality of food required.

    Don't expect anyone to be able to eat 1 solid kg of meat every day either (and that's not counting ongoing dairy produce – a single sheep/goat produces up to 2 litres daily, and a cow up to 15 litres, or 1.5 kilos of cheese, daily PER animal).

    These figures in fact completely ignore the 70% ocean that can also contribute food source and pelagic living space as well.

    Funny, I would consider this as "resembling a nanoassembler": http://www.physorg.com/news/2010-12-voyage-dna-treader.html

    Mark, I am quite well aware that you seem to have a need to believe that Nanotech is impossible. You've made more than enough posts across H+ to display what appears to be a pathological fear of nanotech's dangers, and so have clung to every outdated argument that has been made for the last 20 years to try and keep believing that "It'll never happen."

    But you're using claims that have been disproven. Merkle has demonstrated that mechanical "gears" are more than possible, and Drexler pointed out *IN* EOC that his "mechanical computers" were little more than thought exercises and that "real" nanocomputers were likely to be electronic in nature. Smalley's objections were answered, and despite all those objections, researchers are just going right along making more and more sophisticated self-assembling machines. Yes, those machines are very simple. So was the first "computer."

    But the difference between Nanotech and computers is very simple. Computers had a massive amount of the basic research already done. Babbage had centuries of increasingly sophisticated gear works to draw on, and the first computer had decades of electronics research to build on. Nanotech has had to build EVERYTHING from scratch. It's not a new field drawing on a vast body of pre-existing technology, but one in which the very concept required new ways of looking at previous fields of research. The same goes for AI. The leap from electronic circuits to logic gates and thus computers was rather small compared to the leap from chemistry and biology to Nanotechnology. We understood the basic principles of electronics and computing. We did not and still do not know all the basic principles of either chemistry or biology.

    This more or less makes your argument similar to the claims of "we'll never be able to fly because they've been trying since da Vinci with no success" as the Wright brothers were assembling their first airplane.

    You can continue to cling to your arguments, and your fear, but they are going to continue to be less and less based on reality as time goes by.

    Nice, though kind of unrelated. And you're missing the point. All those foodstuffs could be manufactured via nanotech food-fabbers from basic CHON materials (plus essential micronutrients) from any source. If you want a world fed and powered by solar, then we can do much better than the ~0.1% energy storage efficiency of living things. Every person on Earth could be fed and serviced from an associated tank of raw-materials feeding into the right nano-tech system. We might need to get used to the idea of throwing our things back in the fabber to be reconfigured, but I am sure we could adapt.

    In that situation, then how many can be accommodated on Planet Earth? Personally, given the prospect of bulk carbonoid materials of near diamond strength, I like Arthur C. Clarke's concept from his "3001: The Final Odyssey" of large inhabited towers reaching up to geosynchronous orbit. Assuming 3.5786 metres per level, each tower is 10,000,000 levels high. Assuming a lateral cross-sectional area of ~ 1 km^2, then 3 towers arranged equidistant around the equator represent 30,000,000 sq.kilometres of accommodation. Giving each person a generous 1,000 square metre allotment then allows the 30 billion previously proposed to be accommodated with minimal use of terrestrial landscape. Of course a wider set of Towers can squeeze more in. 300 billion? 3 trillion?

    How much energy do they need? Connected to geosynchronous orbit directly the Towers might be powered entirely from the bounty of the Sun directly. Supporting 3 trillion at the 10 kW/person level - the energy equivalent of 100 times a person's recommended caloric intake - means 30,000 TW is required. While this is a full quarter of the light absorbed by Earth the collector arrays need only be ~8.4% the area at 50% efficiency due to the near perpetual sunlight at that great remove from Earth's shadow.

    I think the situation is more like the successors of Babbage realising that they weren't going to make the Analytical Engine with gears and steam-age technology, but electronics. That we're working with self-organizing biomolecules rather than Drexler's molecular gearings doesn't make it any less nano-tech, just not quite what people expected when it all began. Who knows what the equivalent will be when we jump from present day vacuum tubes to transistors and printed circuits. Or are we still at the electromechanical stage?

    The various applications of femtotechnology are delineated on Bolonkin's website http://femtotechnology.homestead.com/

    As for stability, see http://nextbigfuture.com/2010/09/femtotech-speculation.html

    Aren't the components and machinery inside a cell all nanotechnology?

    True, each component is not a constructed artifice, but this does not mean that they cannot be constructed (and at the Foresight Nanotech Conference in 2009 there were a few presentations where nano-engineers had succeeded in constructing several of these mechanisms formerly limited solely to biology).

    So it isn't true that nanotech is hardly developed. It is in a primitive stage of development similar to the 1910-20 period of aviation, when the basics were understood but the mechanisms by which various solutions would be reached had not all been discovered yet.

    I think it would help enormously if Mark Plus was more precise in what he is trying to debunk. It clearly is not the entire field of nanotechnology, but rather the conceptual designs of Eric Drexler as featured in books like 'Engines Of Creation' and 'Nanosystems'.

    An analogy can be made with flight, I think. If someone points out that a particular design for a flying machine cannot possibly work, they might be right (you would need expert knowledge in the scientific and engineering principles to know for sure). But if someone tells you 'flying machines are impossible' you need no expert knowledge in either the scientific or engineering principles to see this assertion has to be wrong: Nature itself has proved that machines can be made to fly.

    I think Plus does the critics of Drexlerian nanotechnology a disservice when he makes sweeping statements like '"nanotechnologists" strike me as a bunch of flour-flushers'. Statements like that are open to easy counters, in the form of numerous examples of natural 'machines' built and operating at the molecular scale. But, if you read what people like Richard Jones are saying, they clearly understand that nanotechnology in some form or other can (indeed, will) exist. They just do not think the particular designs Drexler outlined are possible.

    OMG - Children calm down -
    we all have opinions about tech and its possible implications, uses etc, but unless you are an expert / peer reviewed and shown to be right, then none of you has the moral high ground in these types of forum stand-offs. So please accept that opinion is all that you have on most of these matters, and learn to play nice.

    Signed Human + regular reader.

