Best of H+: X-Tech and the Search for Infra Particle Intelligence

This article continues a series of three previous articles on femtotechnology, femtocomputing and DNA-style femtocomputing.

Computers are getting smaller and smaller by the year — today’s mobile phones are more powerful than yesterday’s mainframes.  But we’re nowhere near the physical limits of miniaturization.  In previous essays I explored the possibility of femtotechnology and femtocomputing.  Here I will take some further steps in that direction, elaborating on attotechnology and beyond to zeptotechnology.   And I will unravel some of the broader consequences of these technologies, which I collectively label “X-Tech”.   X-Tech provides a potential solution to the Fermi Paradox (“where are all the nonhuman civilizations?”) … maybe they’re not out there living on other planets, but rather living inside atoms and particles!  Perhaps we should be looking inside “elementary” particles because creatures constructed at these tiny scales would operate hugely faster, at far greater densities, and with vastly superior performance levels. We may need a paradigm shift away from outer space to inner space, from SETI to SIPI — the Search for Infra Particle Intelligence!

I apologize that, like its prequels, this article has some parts that might be tough going for the reader who doesn’t remember any of the elementary particle physics they learned in kindergarten.  If you fall in that category, this brief tutorial webpage will help bring you up to speed.  But if you really don’t have time for the details, feel free to skip to the end where I  muse a bit about SIPI….

Beyond Femtotech to Attotech

In my prequel article, “FEMTOTECH: Computing at the Femtometer Scale Using Quarks and Gluons,” I found (in principle) ways to use the properties of quarks and gluons to compute at the femtometer scale. If you haven’t read that essay yet, I suggest you go back and take a look now, or else the details of this one may be a bit tough to follow! The basic idea I presented there was to store a bit using the color charge on a quark, e.g. the red color charge for a 1 and the blue color charge for a 0. Gluons, each carrying two colors, e.g. red/anti-blue or blue/anti-red, could be used to change the color charge of a quark, i.e. to flip the bit from red to blue (from 1 to 0) and vice versa. By sequencing the emission and absorption of appropriate color-changing gluons, I was able to map the three basic logic gates of classical computing (NOT, OR, AND) into corresponding QCD (quantum chromodynamics) phenomena, i.e. the appropriate behaviors of quarks and gluons.
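
As a concrete (purely illustrative) sketch, the color-charge bit scheme just described can be mocked up in a few lines of Python. The function name and the (color, anti-color) encoding are my own inventions for illustration, not anything from QCD proper:

```python
# Toy model (illustrative only, not real QCD): a bit is stored as a quark's
# color charge -- "red" encodes 1, "blue" encodes 0 -- and flipped by
# absorbing a gluon that carries a (color, anti-color) pair.

def absorb_gluon(quark_color, gluon):
    """Apply a (color, anti_color) gluon to a quark's color charge."""
    new_color, anti_color = gluon
    # The gluon's anti-color must cancel the quark's current color.
    if quark_color != anti_color:
        raise ValueError("this gluon cannot act on a %s quark" % quark_color)
    return new_color

BIT = {"red": 1, "blue": 0}

# A (red, anti-blue) gluon flips a blue quark (bit 0) to a red quark (bit 1),
# i.e. it acts as a NOT on a 0-valued bit.
quark = "blue"
quark = absorb_gluon(quark, ("red", "blue"))
print(BIT[quark])  # -> 1
```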

I then wondered if I could do much the same at the next size scale down, i.e. at the attometer scale (10⁻¹⁸ m), using the weak force particles, the W and the Z. As usual I went hunting through my particle physics texts and found a way to do this too (in principle). Here is how it might be done.

The force particles of the weak nuclear force (W+, W- and Z0) have very limited range, typically of the attometer scale (10⁻¹⁸ m), and they have mass; in fact they are considerably heavier than the proton, by about 2 orders of magnitude. I concentrate on the W particles, since I won’t be using the Z particle, which is uncharged. The W particles come in two forms. One is positively charged (“W+”), i.e. it carries one unit of positive electronic charge, and the other is negatively charged (“W-”), i.e. it carries one unit of negative electronic charge.

These weak interaction particles differ from their massless cousins, the photon and the gluons, which mediate the electromagnetic and the color (strong nuclear) forces respectively. The gluons interact only with particles that carry color charge (red, blue or green), and take no part in the weak interactions. The gluons thus have no effect on the “leptons,” i.e. the light particles (such as the electron, the neutrino, and their cousins), but act only on “hadrons,” i.e. heavy particles, such as the “baryons” (protons, neutrons, etc., constructed from 3 quarks) and the “mesons” (pions, etc., constructed from a quark and an antiquark).

The weak interaction particles, however, are more universal: they can interact with BOTH hadrons and leptons. So from a future technologist’s point of view, there may be more scope for a technology based on the weak force particles than on the color force particles. However, there is a considerable downside to the weak interaction particles, and that is their interaction speed, as we will see later in this essay.

The weak particles have another feature that the gluons do not have: they can change the “flavor” of a quark, whereas a gluon can’t. (The “flavor” of a quark is its “type,” i.e. one of the set of 6: {up, down, strange, charm, top, bottom}.) A gluon can only change the color charge of a quark, not its flavor.

For example, a W+ particle can interact with a “down” flavored quark (which has charge −1/3, in units of the proton charge) and convert it into an “up” flavored quark (which has charge +2/3). This reaction (actually an absorption of a W+) can be represented as follows.

W+ : Qd => Qu

i.e. the W+ acts on the down quark to convert it to an up quark; or, the down quark absorbs the W+ and becomes an up quark. Call this process the “absorption” of the W+.

There is a corresponding “emission” process of the W+, which goes as follows. An up flavored quark collides with a hadron (which we ignore here) and “splits” into a down flavored quark and a W+. This reaction can be represented as follows.

Qu => Qd + W+

Note that in both the W+ absorption and the W+ emission processes, total charge is conserved. For example, in the W+ emission process, the charge of the up quark equals the sum of the charges of the down quark and the W+ (i.e. +2/3 = −1/3 + 1).

The above absorption and emission processes have corresponding equivalents for the W- weak force particle, which can be represented by the following two equations.

W- : Qu => Qd     and     Qd => Qu + W-

We now have the necessary tools to create computation at the attometer scale: we can emit a W+ or a W- at will, and use its absorption to change the flavor of a quark from up to down, or from down to up, i.e. to flip the bit stored on the quark. For more on this, see the next section.
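
The absorption and emission rules above, together with the charge bookkeeping, can be sketched as a toy Python model. The encoding and the function names (`absorb_W`, `emit_W`) are hypothetical labels of my own, not standard physics notation:

```python
from fractions import Fraction

# Illustrative sketch: quark flavors as bit-carrying states, flipped by
# W bosons. Charges are in units of the proton charge.
CHARGE = {"up": Fraction(2, 3), "down": Fraction(-1, 3),
          "W+": Fraction(1), "W-": Fraction(-1)}

def absorb_W(flavor, boson):
    """W+ : Qd => Qu  and  W- : Qu => Qd  (the absorption processes)."""
    return {("down", "W+"): "up", ("up", "W-"): "down"}[(flavor, boson)]

def emit_W(flavor):
    """Qu => Qd + W+  and  Qd => Qu + W-  (the emission processes)."""
    return {"up": ("down", "W+"), "down": ("up", "W-")}[flavor]

# Charge is conserved in emission: +2/3 = -1/3 + 1.
new_flavor, boson = emit_W("up")
assert CHARGE["up"] == CHARGE[new_flavor] + CHARGE[boson]

# Re-absorbing the emitted W+ flips the down quark back to up: a bit flip.
print(absorb_W(new_flavor, boson))  # -> up
```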

Computing with Quark Flavors and the Ws

Once you understand the basic principles of the computational model in the femtotech essay, it is rather easy to see a close analogy between the quark color charges and color-changing gluons on the one hand, and the quark flavors and flavor-changing Ws on the other.

You can take the implementation of the NOT, OR, AND gates in the femtotech essay and substitute the attoscale set {quark flavors, flavor-changing Ws} for the femtoscale set {quark color charges, color-changing gluons}. The analysis is the same for both sets, except that the scale is about 3 orders of magnitude smaller, namely the attometer scale. We are thus computing at the attometer scale, creating the beginnings of an “attotech.”

To make the above analogy more concrete, make the following substitutions. On the LHS of the following list is a component of the femtotech story; on the RHS is the corresponding component of the attotech story. To implement the NOT, OR, AND gates of the attotech story, simply substitute each RHS component for the corresponding LHS component in the femtotech story.


A red color charged quark  —> An up quark

A blue color charged quark —> A down quark

A red, anti-blue gluon —> A W+ weak force particle

A blue, anti-red gluon —> A W- weak force particle
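
The substitution table can be expressed as a dictionary, so that a femtotech gate description is “ported” to attotech by straight textual replacement. This is purely an illustration of the analogy, with labels of my own choosing:

```python
# The femto -> atto substitution table above, as a dictionary.
FEMTO_TO_ATTO = {
    "red quark": "up quark",
    "blue quark": "down quark",
    "red/anti-blue gluon": "W+ boson",
    "blue/anti-red gluon": "W- boson",
}

def port_to_atto(femto_description):
    """Rewrite a femtotech gate description in attotech terms."""
    for femto, atto in FEMTO_TO_ATTO.items():
        femto_description = femto_description.replace(femto, atto)
    return femto_description

print(port_to_atto("NOT: emit a red/anti-blue gluon at a blue quark"))
# -> NOT: emit a W+ boson at a down quark
```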


Beyond Attotech to Zeptotech

I had to go to Google to find out the name of the 10⁻²¹ m scale. It’s a “zeptometer,” and 10⁻²⁴ m is called a “yoctometer.” I can’t say much about a possible “yoctotech,” because very little is known about particle physics at such tiny scales. But there are theoretical models at the zeptometer level.

Once the electromagnetic and weak forces were unified into the “electroweak” force in the 1970s, particle theorists attempted to use group theory to create a “grand unified force” or “superforce” (i.e. an “electro-weak-color force”) using the special unitary group SU(5). Out of this (only partially successful) work came the prediction of two superforce particles that would mediate all the interactions, except gravity. (Hopefully string theory will solve the open problem of unifying ALL the forces of nature.)

These two superforce particles are called the “X” and the “Y”, and have masses about 3 orders of magnitude larger than the Ws and the Z. Hence the range they would operate under would be at the zeptometer scale. Since I don’t know how they interact with quarks and leptons, there’s not a lot more I can say about a possible zeptotech.


Having seen above how to implement computing at the femto and atto scales, one can’t help but notice that there are commonalities between the two, allowing one to generalize the creation of future “X-techs” at ever smaller scales, where “X” is any appropriate level label, e.g. nano, femto, atto, zepto, yocto, … Planck(?).

To produce an X-tech, what do you need? Here is a short list. You need:

a) Stable entities (presumably particles) that have at least 2 quasi-stable states (states that do not quickly decay or spontaneously change into some other state). For example, with femtotech, the red and blue color charges on the quarks (corresponding to the binary states “1” and “0”); or with attotech, the up and down flavors of the quarks.

b) At least two force particles that can be emitted and absorbed, to be used to change the states of the entities above. For example, with femtotech, the red, anti-blue gluon (which changes a blue charged quark into a red charged quark) and the blue, anti-red gluon (which changes a red charged quark into a blue charged quark); and with attotech, the W+ (which changes a down flavored quark into an up flavored quark) and the W- (which changes an up flavored quark into a down flavored quark).

Once you have such basic ingredients, it is fairly “easy”, at least in principle, to devise logic gates using them to compute any Boolean function. You have computation at the X-scale, and hence you have the beginnings of an X-tech.
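
Requirements (a) and (b) can be captured in a minimal abstraction. The class below is a hypothetical sketch of my own, showing that given two quasi-stable states and two state-flipping force particles, a NOT gate falls out for any X-tech:

```python
# A generic X-tech: two states encode the bits, and two force particles
# toggle an entity between them (requirement (b) above).
class XTech:
    def __init__(self, state0, state1, set_to_1, set_to_0):
        self.state_of = {0: state0, 1: state1}        # bit -> physical state
        self.particle_for = {1: set_to_1, 0: set_to_0}  # bit -> flipper

    def NOT(self, bit):
        # Emit the force particle that flips the entity to the opposite state.
        particle = self.particle_for[1 - bit]
        return 1 - bit, particle

femtotech = XTech("blue quark", "red quark",
                  "red/anti-blue gluon", "blue/anti-red gluon")
attotech = XTech("down quark", "up quark", "W+", "W-")

print(femtotech.NOT(0))  # -> (1, 'red/anti-blue gluon')
print(attotech.NOT(1))   # -> (0, 'W-')
```

The same instantiation pattern would apply to any future zepto- or yocto-scale candidate, provided physics supplies the two states and the two flippers.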

Is Smaller Faster?

I used to be under the impression (and I still am, to some extent) that the further one scaled down, the greater would be the overall performance of a given level of technology. For example, a femtotech would outperform a nanotech by a factor of a trillion trillion: the density would be a million cubed times greater, and the signaling speed between the femto components would be a million times faster, because the components are a million times closer together, giving a total performance increase of a million to the fourth power, i.e. 10²⁴, a trillion trillion times superior to nanotech. A femto machine could flip bits a trillion trillion times faster than a nano machine.
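
The back-of-envelope arithmetic here can be checked directly:

```python
# Nano (10**-9 m) -> femto (10**-15 m): lengths shrink by a million, so
# density grows by a million cubed and signaling speed by a million.
length_shrink = 10**6                 # nano -> femto
density_gain = length_shrink ** 3     # components per unit volume
speed_gain = length_shrink            # signals travel shorter paths
total_gain = density_gain * speed_gain
assert total_gain == 10**24           # a trillion trillion
print(total_gain)
```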

This huge superiority will put pressure on our future “artilects” (artificial intellects, massively intelligent machines) to “upgrade” (actually “downgrade”) themselves from “nanolects” (i.e. artilects based on nanotech) to “femtolects” (i.e. artilects based on femtotech). Since they will be hugely smarter than we are, they will probably then continue down scaling, assuming that each scaling down resulted in vastly superior performance, but is that assumption valid?

Let’s do the numbers.

The femtotech story in the previous essay uses gluons to mediate the color force. Typically these color force interactions occur in a time frame of 10⁻²³ second, and at a range of about 10⁻¹⁵ m.

The attotech story in this essay uses the W weak force particles to mediate the weak force. Typically these weak force interactions occur in a time frame of 10⁻¹⁰ second, and at a range of about 10⁻¹⁸ m.

You may be shocked by the much slower speed of the weak interactions, i.e. about 10 trillion (10¹³) times slower. So how does that affect the total performance of attotech vs. femtotech?

Well, badly, actually! Of course, the density increase of atto relative to femto would be a billion times, i.e. a thousand cubed, but the much slower interaction speed more than overpowers the greater density. So there may be 10⁹ times more components per unit volume of attoteched matter, but each interaction is 10¹³ times slower, so the total performance ratio is 10⁹ × 10⁻¹³ = 10⁻⁴, i.e. ten thousand times inferior. So much for an attotech based on the weak force!
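
Doing the same arithmetic in powers of ten (a sketch of the estimate above):

```python
# Femto -> atto: a thousandfold shrink in length.
density_exp = 9      # (10**3)**3 = 10**9 more components per unit volume
speed_exp = -13      # weak (~1e-10 s) vs color (~1e-23 s): 10**13 slower
net_exp = density_exp + speed_exp
assert net_exp == -4  # net factor 10**-4: ten thousand times *inferior*
print(10.0 ** net_exp)
```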


Does the above analysis throw a monkey wrench into the works of the idea that the hyper-intelligent creatures in the universe, billions of years older than we are, are super tiny? Not necessarily. Going from nano to femto is an obvious performance enhancer, because not only is the femto scale a million times smaller than the nano scale, it uses the color force, one of the fastest phenomena that physics knows about. (Although the big bang is theorized to have operated on a time scale of 10⁻⁴⁴ second, the Planck time, so maybe there is scope for much faster processes? Further research needed here! Inflation, maybe?)

Sticking with the color force: its essential ingredients, the quarks and the gluons, are point-like particles. They have no known internal structure, so they are modeled as points, and could therefore scale down hugely.

The only reason the distance scale of 10⁻¹⁵ m is used for the color force is that quarks are usually found bound together in 3s in baryons (like the proton or neutron), which have femtometer sizes. But if an X-tech could be built inside the volume of a baryon, i.e. inside a sphere of radius of about a femtometer (a.k.a. a “fermi”), then it might be possible to have many quarks and gluons operating in that space at very tiny scales. Alternatively, one could use gluons alone, since they too are color charged and may interact with each other, forming complex “glueballs” that function at color force speeds, i.e. 10⁻²³ second.

It’s difficult to say much about such an X-tech, because the basic physics of how lots of quarks and gluons (or glueballs) would interact in such tiny volumes (where, presumably, the “quark confinement” (stretched rubber band) phenomenon would not be operating; see my earlier essay on this) is poorly known.

Our artilects may be such superb scientists that they may be able to create such an X-tech, and hence give themselves the option of “downgrading” themselves to achieve vastly greater performance levels. Just how far down they could go is an interesting research question. One of my ambitions over the next couple of years is to get familiar enough with string theory to dream of creating a “string-tech.”

Now to the punch line.

It should be clear from all this talk of femtotech, attotech, zeptotech, X-tech, etc. that as one scales down, in general, performance levels increase dramatically. Hence one can readily speculate that any nano-based artilect, sooner or later, will not be able to compete with its femto-based cousins, and will probably downgrade itself as well. This logic applies all the way down (to Planck-tech?). Hence we come inevitably to the following dramatic conclusion.

The hyper intelligences in our universe that are billions of years older than we are (the universe is about 3 times older than our sun) have probably “downgraded” themselves to achieve hugely greater performance levels. Whole civilizations may be living inside volumes the size of nucleons, or smaller.

When I first had this idea, about a decade ago, I chuckled, but now I take it very seriously, because there seems to be so much logic behind it.

Similar ideas have been floated before — see for example John Smart’s essay on related themes, The Transcension Hypothesis — but as I work through the details of X-tech, the viability of this direction becomes clearer and clearer.

What impact does such thinking have upon SETI (the Search for Extra-Terrestrial Intelligence)? Well, I think it makes SETI look rather provincial. I’m not suggesting that the SETI effort be canceled, but the above thinking does suggest that the intelligences “out there,” i.e. extraterrestrials (ETs) who might be primitive enough to bother sending radio signals to beings like us, are NOT the most intelligent specimens in the universe. The really smart ones, I suggest, are very, very tiny.

Therefore I recommend that humanity start thinking about ways to detect their presence. We need a SIPI, a Search for Infra Particle Intelligence. For example, why are the elementary particles such “carbon copies” of each other, for each particle type? Once one starts “seeing” intelligence in elementary particles, it changes the way one looks at them, the way one interprets the laws of nature, the interpretation of quantum mechanics, etc. It’s a real paradigm shift away from looking for nonhuman intelligence in outer space, to looking for it in inner space, i.e. SIPI.


This article was previously published on Nov 25, 2011.

14 Responses

  1. mike says:

    The alleged “tutorial” on particle physics is anything but a tutorial; it assumes specialized jargon as an everyday vernacular and does very little to communicate the world of QM. I’ve searched high and far but I’ve yet to find anybody talk eloquently about QM, utilizing an everyday-ness to bridge the gap between QM and the lay person. This is a shame because I would love to get in on the conversation.

  2. Nicholas says:

    Personally I believe scale remains consistent regardless of how the inner workings change. Look to the past to predict the future. Computers have changed a lot, but even though we could make them smaller, we do not. Instead we simply cram more into the same space for greater performance. One would think we’d build larger computers then, to contain more; again, we do not. Instead we always aim for a balance.

    An issue of course is this is using human thinking. These aliens are not human; in a few hundred years we might not be human. So I am not saying it isn’t possible. Just that I feel this would be used to make technology more powerful and ubiquitous, but the civilizations would maintain their macro scale.

  3. Miniaturization is what drives Moore’s law, so any viable X-tech we ought to be seeing before 2050–60, when supposedly the physical limits to computation will have been reached (investigated by our quantum computation overlord Seth Lloyd). Well, what’s that going to be? Obviously Planck-scale tech; that would be the ultimate as far as we know. 🙂 Though, surprisingly, some physicists posit a quantum foam even *beneath* the Planck scale!

    What Planck-scale computing means is that you can manipulate the “real” atoms of the space-time lattice, the most fundamental units of physical existence. Assuming of course digital physics is correct, but hey, we know it’s correct at an intuitive level 🙂

  4. Molebatsi says:

    Very interesting! These X-technologies might even downgrade humanity’s consciousness into new forms of electricity, that is, they might inevitably expose humanity to new electrical dimensions which, like electricity, are sub-natural in essence. Sometime during the 21st century, computers will reach a certain level of sophistication, where their hardware will begin to generate a cloud of supersoul consciousness, a form of superior intelligence (supermind level of being). This strange cloud will serve as a portal through which any non-physical being (lower astral and/or sub-natural) can take residence in a computer and thus parasitise the system. I, Robot the movie eloquently epitomises this phenomenon, ghosts in the computers and the rise of machines (Terminator the movie). This is in line with the Clairvision mapping of the future(

  5. Earo16 says:

    Well now. If I’m understanding this correctly, then it’s possible to have two bits per quark at a time–one for the color charge, and one for the flavor. Given that attotech is a bit slower than femtotech, I suppose it would be logical for data to be stored at the attotech level and processed at the femtotech level…the molecules doubling as both the processor and the hard drive.

    Of course, there’s the idea that a lot of processors working in parallel (such as the human brain) are faster than a lesser amount of processors working at a higher latency (such as a personal computer). But, of course, it would all depend on the information being processed; we handle thoughts and feelings as well as computers handle complex algorithms and instructions. So I dunno. Only time and technology will tell.

  6. Maybe it’s a big loop. You keep getting smaller and smaller and smaller and then end up where you started, like going around a great circle of a sphere, but in orders of magnitude rather than space. A different way of conceiving a fourth dimension: scale.

  7. Interesting.

    I, too, have argued that hyperintelligent beings might be “hard to see” (One Singularity Or Many, in H+, November 10, 2011). But it never occurred to me that this might be because they were genuinely microscopic.

    I’m also reminded of Arthur C. Clarke’s _Profiles of the Future_ . You’ll recall that he argued the exactly opposite position and insisted intelligent beings might grow larger but not smaller.

    victor s

  8. gt says:

    Love these ideas. Previously, I had liked the idea of Matryoshka stars as a logical possibility for the evolution of life and intelligence. This theory also had an explanation for the Fermi Paradox.
    However, going small instead of going so BIG may actually make more sense. It’s more elegant and even more of a radical idea. Which is a good thing. Because my sense is that no matter how radical we can imagine such post singularity type entities, our imagination will come up ridiculously short.

    I can’t help wonder what the hell such “civilizations” would be up to way down there.
    The idea that they might go into black holes also makes sense. Are they running simulations? Are we a simulation running inside a subatomic particle in another universe 😉 I think there is a scene in Animal House where a freshman is stoned with a professor musing on such things.
    How would such civilizations relate to us?
    Would they intervene in our evolution
    (I suspect that they would. If they had the ability to nudge us towards a more interesting, information- and creativity-rich evolution, wouldn’t they do that, and perhaps nudge us away from self-annihilation?) Just having fun here.
    And how would they be involved in the fate of our universe as a Whole? Would they, as Frank Tipler speculates, want to unite the whole universe into an infinitely powerful, intelligent Omega Point? Maybe the Whole Universe is already united into some kind of Super Intelligence through quantum entanglement, and the Omega Point is already here in some way. What would be the agenda of these Infra Particle Intelligences? I think of Kevin Kelly’s “What Technology Wants.”
    Life/Technology seems to have certain tendencies baked in, and this could give us some clues. The “Technium” (Kelly’s word)
    for example wants more efficiency, mutuality, evolvability, diversity, freedom, etc.
    Anyway, just lit up by the idea of infra particle intelligence. Thanks for the article 🙂

  9. If it is possible to scale down to a level where you wouldn’t get destroyed inside a black hole, it would be a logical place to go to get more matter. After all, I reckon sentient beings would want to be where resources (raw matter) are plentiful and dense. So, maybe x-tech civilizations drop themselves inside black holes and use them (the black holes) as matter/computronium collectors.

  10. Jason says:

    Planck scale, not Plank. You should familiarize yourself with this seeming boundary of measurements as you explore string theory and downsizing.

  11. hannes says:

    Reminds me of ‘Blood Music’ (1985) by Greg Bear. In this SF novel he explores a world of bacteria-sized intelligences that eventually downgrade to still smaller scales.


