
X-teched creatures billions of years old

In earlier essays I conceived the idea of X-Techs, i.e. technologies at the “X scale”, where X could be femto, atto, zepto, etc. Scaling down a technology by a factor of a thousand would increase its total performance by a factor of 1000 to the fourth power, i.e. a trillion, since the density would increase by 1000 cubed, and the inter-component signaling speed would increase by a factor of 1000, since the inter-component distances are 1000 times smaller. Hence “smaller is faster.” This line of thinking led me to the notion of SIPI (Search for Infra Particle Intelligence) rather than the usual SETI (Search for Extra Terrestrial Intelligence), which is based on receiving radio signals from creatures similar to ourselves at a similar development level, and which strikes me as rather provincially minded.
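The scaling argument above can be checked in a few lines of arithmetic; this is just a sketch of the essay’s own factor-of-1000 example:

```python
# Sketch of the essay's scaling argument: shrinking a technology's components
# by a factor s packs s**3 more of them into the same volume, and shortens
# signal paths by a factor s, for a combined performance gain of s**4.
s = 1000                       # the essay's example scale step
density_gain = s ** 3          # components per unit volume
speed_gain = s                 # shorter distances -> faster signaling
total_gain = density_gain * speed_gain
print(total_gain == 10 ** 12)  # True: a trillion, as stated above
```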

The next logical step, it seems to me, is to speculate on what hyper intelligent synthetic creatures (artilects), which are x-tech based, might have done with themselves over billions of years, given that our sun, our star, is billions of years younger than most stars in the observable universe. This is a fascinating question, which this essay attempts to address.

How does one begin on such a speculation, given that these hyper intelligences would have performance levels trillions of trillions… of times above the human level, and have had billions of years in which to evolve and complexify, before our sun was even born?

One place to start is to estimate their levels of complexity. One of my research topics is “Planck-tech”, i.e. a technology at the Planck scale, 10^-35 of a meter, which is 10^27 times smaller than our current nanotech scale. Since 27 is 9 times 3, a Planck-tech could outperform our current nanotech by a factor of a trillion to the 9th power, i.e. (10^12)^9, which equals 10^108, which in words is more than a googol. (A googol is 10^100.) So a Planck-tech could “googolify” our current nanotech in relative performance levels. I chose Planck-tech since it is the smallest scale that mainstream mathematical physics, i.e. string theory, has conceived of. These googoled artilects (“googolects”) would be veritably god-like compared to human beings, which brings me to my first major speculation, beyond their vast performance superiority and their tiniest of sizes.
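The googol arithmetic can be made explicit; the nanometer-range figure for “current nanotech” below is my assumption, chosen to match the 10^27 ratio quoted above:

```python
# From a ~1e-8 m nano scale down to the ~1e-35 m Planck scale is 27 orders
# of magnitude. Each factor of 10**3 in size yields 10**12 in performance
# (the "smaller is faster" rule), and 27 = 9 * 3, so:
orders_smaller = 27
performance_exponent = (orders_smaller // 3) * 12    # (10**12)**9 = 10**108
googol_exponent = 100
print(performance_exponent)                    # 108
print(performance_exponent > googol_exponent)  # True: more than a googol
```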

If one takes googolects seriously, then given that their scale is the same as strings, it seems reasonable to suggest that they could manipulate the properties of strings and related M-theory objects into structures of vast complexity, i.e. structures with a complexity level a googol times greater than today’s artificial brains. These googolects would be “thinking” (signaling) 10^27 times faster than our current nanoelectronic circuits, since they are 10^27 times smaller (assuming the speed of light remains a barrier).
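As a rough light-speed check of that signaling claim (the rounded scale values are my assumptions, not the essay’s):

```python
# Signal-crossing time t = d / c at each scale, assuming light-speed
# signaling, as the essay does.
c = 3.0e8          # speed of light, m/s
nano = 1e-8        # assumed current nano-scale signal distance, m (rounded)
planck = 1e-35     # Planck length, m (rounded; actually ~1.6e-35)
t_nano = nano / c
t_planck = planck / c
print(t_nano / t_planck)   # ~1e27: Planck-scale signaling that much faster
```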

If these googolects can manipulate M-theory objects as they choose, then at larger scales, e.g. at our own human scale, we would not be able to distinguish whether the properties of the higher scales are “givens” (as is assumed in physics today) or “engineered”. Thus a real paradigm shift becomes quasi-inevitable: when we study the properties of matter at the tiniest scales, we may be studying properties that have been engineered, designed, manufactured. This would make these googolects “gods”, because they have “created the universe”. Of course, if these googolects are the result of billions of years of evolution, starting with biological evolution, then transitioning from biological to artilectual evolution, then finally scaling down ever smaller to reach the smallest possible (?) scale, the Planck scale, then obviously they will be billions of years ahead of us, since we humans may be on the same “growth curve.” Incidentally, one cannot help but notice that studying googolects leads one almost inevitably into religious questions.

Before I start speculating on other things these googolects might do, this is probably a good moment to coin a label for a new research area that does just that, i.e. speculates on what googolects might do. I suggest “googolectics.” Since two heads are better than one, and many heads are better than two, I can only hope to break the ice by giving my own few suggestions and contributions to googolectics.

Another of my research interests is something I call “I.T.”, i.e. intelligence theory, which doesn’t exist yet. This would be a branch of mathematics that underpins the space of intelligences, in which our human-level intelligences would be only a small subspace of that superspace. Einstein’s and von Neumann’s intelligence data points would be only marginally different from the data points of ordinary humans in this space. Intelligence space would include all known types of creatures, ranging from the single cell to ourselves, and as artificial brains are developed in the near future, their data points could be added to the intelligence space. What I hope will come out of this I.T. is an understanding of what it takes to create an intelligence level superior to some basis point. In other words, we will know what it is that generates superior intelligence.

Once I.T. can tell us what intelligence is, so that we have a whole mathematical theory about it, then we will be able to create more intelligent creatures (artilects) simply by providing their forebears with more of what I.T. tells us intelligence is. For example, say in the near future neuroscience discovers that intelligence goes up with higher inter-neural signaling speeds and a greater number of synaptic connections between neurons. Perhaps once whole human brains can be mapped in detail (all neurons, all synapses and their strengths, etc.), then correlations between these properties and individual intelligence levels may be discovered. This is highly likely. This knowledge will feed into the development of I.T. It might then be possible to make vast extrapolations up the parameter graphs of artificial intelligence levels, before we run into conceptual limits, whatever they end up being, i.e. hitting up against the limits of a given intelligence model before having to jump to a newer, superior model.
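The “extrapolate up the parameter graphs” idea can be sketched as a deliberately toy fit; every number below is invented for illustration, and the linear model stands in for whatever I.T. would actually provide:

```python
import numpy as np

# Hypothetical data points: (inter-neural signaling speed, synapse count)
# versus an intelligence score. All values are invented for illustration.
X = np.array([[1.0, 1.0],
              [2.0, 1.5],
              [3.0, 2.5],
              [4.0, 4.0]])
y = np.array([1.0, 1.8, 2.9, 4.2])

# Least-squares fit of a linear "intelligence model" with an intercept term.
A = np.c_[X, np.ones(len(X))]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Extrapolating far beyond the fitted range is exactly the move the essay
# cautions about: at some point the model itself presumably breaks down.
far = np.array([100.0, 100.0, 1.0])
print(far @ coef)   # a large score, but only as trustworthy as the toy model
```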

One obvious point that seems virtually certain is that googolects would be utterly incomprehensible to humans. We would be far too stupid to understand their godlike capabilities. In terms of sheer quantity of knowledge alone, the googolects would outperform us by amounts we can try to calculate now. Let us assume that at human-level intelligence and above, the total knowledge generated by a species doubles every year. With this ultra-conservative assumption alone, the googolects’ total knowledge would be 2 to the power of several billion times greater than ours, given that their stars are on average billions of years older than our star. Of course, their knowledge-doubling rate would be far higher, given their vastly superior thinking speed.
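The size of that ratio can be made concrete; the one-doubling-per-year rate is the essay’s assumption, and the 4-billion-year head start is my illustrative round number:

```python
import math

# 2**(several billion) is far too large to compute directly, but its
# base-10 logarithm is easy.
years_head_start = 4_000_000_000          # illustrative head start, years
log10_ratio = years_head_start * math.log10(2)
print(f"knowledge ratio ~ 10**{log10_ratio:.3e}")   # ~ 10**(1.204e9)
```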

My main two areas of intellectual interest are pure math and math physics. Let us assume that pure math has no conceptual limits, so that googolects could keep exploring the frontiers of pure math without limit. Perhaps they might spend their time exploring the implications of math hypotheses without limit. If these googolects are capable of manipulating nature at the tiniest scales, then they could apply their godlike math knowledge to the creation of the “laws” (blueprints) of the universe(s) they create, which might explain why today’s math physicists are intrigued to discover that the more they explore the ever smaller scales of the universe, the more complex is the math that is needed to describe them.

For example, why on earth are the elementary particles classifiable by Lie algebras (discovered in the 1960s), and why does the largest sporadic simple group (the “Monster”) form the basis of a 26-dimensional string theory (a connection worked out in the 1990s)? Imagine this trend continues. At what stage would math physicists have to throw in the towel and simply abandon their unconscious assumption that they are discovering the properties of the universe (i.e. discovering what is), rather than accepting the idea that they are actually learning how googolects have “engineered the universe”? I suggest that only a few more such math-physics revelations would be needed before a paradigm shift becomes inevitable. The “mathematical principle” (i.e. that the universe is too fantastically powerfully based on mathematics to be a coincidence) would have to be accepted, and hence math physics would have found evidence that our universe has been designed.

This leads one to the suggestion that by studying the math properties of the universe at ever deeper and ever more intellectually demanding scales, we can come indirectly to some knowledge about the capabilities of the googolects. We would know, for example, that they were great mathematical engineers, building their constructs on the beautiful and powerful mathematics that today’s math physicists are already discovering.

I have only scratched the surface of this fascinating new research topic. I hope that some of the above ideas will stimulate you to criticize them, and to go far beyond them, and thus establish firmly a new research field, as well as a new math physics “religion” of “googolectics.”

Playlist of Other Interviews with Hugo de Garis


  1. I highly recommend checking out Brian Whitworth’s virtual reality model, which posits Planck-scale, fundamental (data) units. And when you’re done checking out Whitworth, check out Tom Campbell, a NASA physicist and engineer who takes it a step further and posits a “larger consciousness system” as the ‘initiator’ of the VR simulation in which we are fully immersed.

    Here’s a link to Whitworth:

    Tom Campbell’s “My Big TOE” (Theory of Everything) can be found on his site. He is also all over YouTube.

  2. We marvel that the Universe reflects this apparently human invention we call “math”. This is back-to-front IMO. We reflect the Universe via this perception of its order that we perceive via math. We’re a reflection of the Universe’s order, not its makers. So why are we surprised?

  3. If physical laws and elementary particles are engineered, what’s the propagation speed for changing spacetime laws? Are perhaps the laws different in various parts of the universe?

  4. I think the last thing our civilization needs is another religion. Physics is physics; we only use math to model it and make predictions. Even Carl Sagan speculated that if there were a Creator, it would hide clues deep within mathematical constants (he used Pi as his example). But even his speculation included a null hypothesis: if we don’t find something there, the hypothesis is invalid. We can speculate about intelligence at the Planck scale, but it need not be religious.

    Math = Intelligent Design, really?

    • If we are going to talk logic, then Descartes’ Cogito Ergo Sum pretty much refutes any ontological or epistemological argument that can be made in favor of materialism. There doesn’t have to be a “god” for there to be a larger consciousness system that is responsible for the rule-set of (so-called) “physical” reality (our physics).
