# Infinity Point Will Arrive by 2035 at the Latest

While writing a paper for the 100 Year Starship Symposium, I wished to convince the starship designers that they should acknowledge the dynamics of the high-technology economy, which may be crucial for interstellar missions. Thus motivated, I made a new calculation regarding the Infinity Point, also known as the singularity. According to this most recent revision of the Infinity Point theory, it turns out that we should expect the Infinity Point by 2035 in the worst case. Here is how and why.

Infinity Point was the original name for the hypothetical event when an almost boundless amount of intelligence would become available, introduced in Solomonoff’s original 1985 research (1); Solomonoff is also the founder of the field of mathematical Artificial Intelligence (AI). That particular paper gave a mathematical formulation of the social effects of human-level AI and predicted that, if human-level AI were available on a computing architecture to which Moore’s law applied, then, given constant investment in AI every year, a hypothetically infinite amount of computing capacity and intelligence would be reached in a short period of time. His paper explained this event as a mathematical singularity, wherein the continuity of the computing-efficiency function with respect to time is interrupted by an infinity. The term singularity was later popularized by science fiction authors and other researchers who favored the concept, such as Ray Kurzweil. I encourage readers to immerse themselves in the vision of the technological society in that paper, which predicts many other things, such as the application of psycho-history. In person, Solomonoff was every bit the pioneer of high technology and modernism that his ideas revealed him to be. He told me that he had proposed the idea of a machine more intelligent than man in the 1940s, much earlier than the Dartmouth conference. If there was ever a true visionary and a man devoted to the future, he certainly fit the bill. Thus, he was not only the first man to formulate the general solution to AI and to lay out the mathematical theory of the Infinity Point, but also the first scientist to speak of the possibility with a straight face (although similar ideas had been conceived of in science fiction before).

The original theory arrives at the Infinity Point conclusion by making a few simple mathematical assumptions and solving a system of equations. The assumptions may be stated in common language as follows:

• The size of the Computer Science (CS) community is proportional to the improvement in computing technology
• The size of the CS community is proportional to the rate of improvement of the log of computing efficiency
• A fixed amount of money is invested in AI every year

These three assumptions are shown to produce a (theoretically) infinite improvement in a short time, as they depict a positive feedback loop that accelerates the already exponential curve of Moore’s law. Up to now, this is the same singularity that many H+ readers are all too familiar with.
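In modern notation, the feedback loop can be sketched as follows (a simplified reconstruction, not Solomonoff’s exact system of equations). Let $$C(t)$$ denote the computing capacity obtainable at fixed cost. A fixed yearly investment deploys capacity proportional to $$C$$, and if the rate of improvement of $$\log C$$ is proportional to the deployed capacity, then

$$\frac{d \log C}{dt} = k C \quad \Rightarrow \quad \frac{dC}{dt} = k C^2,$$

whose solution $$C(t) = C_0 / (1 - k C_0 t)$$ diverges at the finite time $$t^* = 1/(k C_0)$$: the singularity.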

As a reminder, Moore’s Law as originally conceived states that the number of transistors placed on a microprocessor at a fixed cost doubles every two years. However, Moore’s law has tapered off; nowadays the number of transistors unfortunately doubles only every three years. Yet a seemingly more fundamental law has emerged, one that relates to the energy efficiency of computing: Koomey’s Law (2), on which some semiconductor companies like NVIDIA have even based future predictions. Koomey instead observes that the energy efficiency of computing doubles every 18 months, based on a trend (in log scale) that goes back to 1945.

Therefore, I updated the Infinity Point Hypothesis in two papers, using Koomey’s Law instead. In the first paper (3), I estimated human-level AI to be feasible by 2025, based on Koomey’s Law. In the second, I combined this new projection with a worst-case estimate of human-brain computing speed. It is mostly straightforward to obtain this figure. The number of synapses in the adult neocortex is about $$1.64 \times 10^{14}$$, and the total number of synapses is less than $$5 \times 10^{14}$$. Since the maximum bandwidth of a single synapse is estimated to be about 1500 bits/sec (i.e., when information is being transmitted at the maximum rate), the total communication bandwidth of this parallel computer is at most $$2.5 \times 10^{17}$$ bits/sec, which roughly corresponds to a computing speed of 3.8 petaflop/sec. There are some finer details I am leaving out for the moment, but that is a quite good estimate of what would happen if your entire neocortex were saturated with thought, which is usually not the case according to fMRI scans. I then calculate the energy efficiency of the brain computer, which turns out to be 192 teraflop/sec per watt, much better of course than current processors. However, a small, energy-efficient microchip of today can achieve 72 gigaflop/sec per watt, which is not meager at all.
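These back-of-the-envelope figures are easy to check in a few lines of Python. The synapse count and per-synapse bandwidth are the ones quoted above; the conversion of 64 bits per floating-point operation and the 20 W power draw of the brain are my working assumptions here:

```python
# Worst-case estimate of neocortex computing speed and energy efficiency,
# using the figures quoted in the text.

synapses = 1.64e14          # synapses in the adult neocortex
max_bw_per_synapse = 1500   # bits/sec per synapse at maximum transmission rate
bits_per_flop = 64          # assumption: one flop moves one 64-bit word
brain_power_w = 20          # assumption: rough power draw of the brain, in watts

total_bw = synapses * max_bw_per_synapse   # ~2.5e17 bits/sec
flops = total_bw / bits_per_flop           # ~3.8e15 flop/sec
efficiency = flops / brain_power_w         # flop/sec per watt

print(f"bandwidth:  {total_bw:.2e} bits/s")
print(f"speed:      {flops / 1e15:.1f} petaflop/s")
print(f"efficiency: {efficiency / 1e12:.0f} teraflop/s per watt")
```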

When I thus extrapolate along Koomey’s trend in log scale, I predict that in 17 years, in 2030, computers will attain the human level of energy efficiency of computing, in the worst case.
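The 17-year figure follows directly from the two efficiency numbers above, assuming Koomey’s 18-month doubling continues; here is a quick sanity check:

```python
import math

# How long until Koomey's Law closes the gap between today's
# energy-efficient chips and the brain's estimated efficiency?

current = 72e9        # flop/s per watt: energy-efficient microchip (2013)
target = 192e12       # flop/s per watt: worst-case brain estimate
doubling_years = 1.5  # Koomey's Law: efficiency doubles every 18 months

doublings = math.log2(target / current)  # ~11.4 doublings needed
years = doublings * doubling_years       # ~17 years

print(f"{doublings:.1f} doublings -> {years:.1f} years "
      f"(2013 + {round(years)} = {2013 + round(years)})")
```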

I then assume that R = 1 in Solomonoff’s theory; that is to say, every year we invest an amount of money in artificial intelligence that matches the collective intelligence of the CS community. With the computer technology of 2030, this is a negligible cost, as each CS researcher will already have a sufficiently powerful computer, and merely running it continuously would let a researcher offload his research to a computer drawing 20 W; the operational cost to the world economy would be completely negligible. At this low rate of investment, a massive acceleration of Koomey’s law will be observed, and according to the theory, the Infinity Point will be reached in about 5 years (4.62, to be exact).
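As a toy illustration of how this feedback turns exponential growth into a finite-time blow-up, the loop can be simulated numerically. The constants below are illustrative (an 18-month baseline doubling time, normalized starting efficiency), not Solomonoff’s original parameters, so the blow-up time here does not reproduce the exact 4.62-year figure from the theory:

```python
import math

# Toy model of the Infinity Point feedback: the rate of improvement of
# log2(efficiency) is itself proportional to the (normalized) efficiency,
# i.e. dc/dt = k * ln(2) * c^2, which diverges in finite time.

k = 1 / 1.5          # baseline: one doubling per 18 months (log2 per year)
c = 1.0              # computing efficiency, normalized to 1 at the start
t, dt = 0.0, 1e-4    # time in years; simple Euler integration

while c < 1e9 and t < 10.0:
    c += k * math.log(2) * c * c * dt
    t += dt

print(f"efficiency exceeds a billionfold improvement after {t:.2f} years")
```

With these constants the blow-up arrives after roughly two years; the qualitative point is that a finite wall is hit within a few years, not decades.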

That is, all else being equal, we should expect the Infinity Point, when we will approach the physical limits of computation to the extent technologically possible, by 2035 at the latest. Naturally, I imagine new physical bottlenecks will arise, and I would be glad to see a good objection to this calculation. It is entirely possible, for instance, that an inordinate amount of physical and financial resources would be necessary to carry out the experiments for, and the manufacturing of, the hypothetical super-fast future computers.

Nevertheless, we live in interesting times.

Onwards to the future!

References:

(1) Ray Solomonoff, “The Time Scale of Artificial Intelligence: Reflections on Social Effects,” Human Systems Management, Vol. 5, pp. 149–153, 1985.

(2) J. G. Koomey, S. Berard, M. Sanchez, and H. Wong, “Implications of Historical Trends in the Electrical Efficiency of Computing,” IEEE Annals of the History of Computing, Vol. 33, 2011.

(3) Eray Özkural, “Diverse Consequences of Algorithmic Probability,” Solomonoff 85th Memorial Conference, Melbourne, Australia, Nov. 2011.

1. I’m sorry, I didn’t realize the Commodore 64 was a supercomputer.

Oh that’s right, it wasn’t, and that image is totally misleading.

Nice try dipshit.

2. Sorry, the “t” on my keyboard is wearing out, please add the necessary ones so that the above post is more grammatically correct.

3. LOL this is such a simplified model of reality, it belongs in a 1950s-style sci-fi pulp rag. I can already produce an intelligent machine that is as energy efficient as a human brain. It is a human brain, and there are billions around. Basically, the assumption that is most flawed is that somehow a positive feedback in computer science will discontinuously disrupt the world (in good or bad ways). Why does an AI need to match human intelligence? Why not 0.8 times intelligence, or 1.2 times, before this happens? Why do we think these AIs will work together? Why won’t they go to war with each other, like my antivirus and malware already do on my computer? The equation of synaptic number and speed to intelligence is hugely flawed. Any evidence that Einstein had significantly more neurons or synapses than, say, some member of congress? Don’t get me wrong, I think that computers will continue to accelerate our technology, but this singularity is just as goofy as Y2K, the Mayan calendar, or the rapture. 2035 just happens to be far enough away that most of us will forget this prediction by the time it gets here.

4. Does it matter?

It only matters if this … “stuff” … will provide me with radically enhanced experiences, increased comforts, and an extended lifespan. If it doesn’t, all this conjecture can go to hell.

Call me an “anti-Randian” if you will, but in terms of enlightened self-interest, that’s where I stand. I am here to consume and vote. Don’t get in my way, or there will be … warfare.

5. Very nice article, thank you.

My concern is that estimating the speed of the brain on the basis of the number of synapses and synaptic bandwidth might be inadequate, given some recent research suggesting additional computation occurring within neurons via a previously unknown mechanism involving dendritic excitability. Are you familiar with this, and might it change your estimate of when the Infinity Point will arrive? IMHO, we presently understand the brain too little to arrive at accurate speed and efficiency numbers. It makes no difference in terms of the big picture, but it might change the timeline to achieve human-level AI if it turns out the brain is better than we thought. Would love to hear your thoughts on this.

http://www.sciencedaily.com/releases/2013/10/131027185027.htm

• Yes, I am aware, and it might change things a bit, though the synaptic bandwidth figure is based on sound experimental neuroscience research. So the communication bandwidth cannot be too different, I think.

But could the “real” computation bandwidth be an order of magnitude faster? Yes, that is quite likely; much like in our computers, the CPU is usually much faster than the network interface. However, the architecture of the brain is fine-grained, and that would make a difference. We still do not know precisely.

OTOH, the dendritic trees may be modeled as “internal” neural nets, I think. By my logic here, this would increase the total communication bandwidth among simple transputer units.

6. It is nice to see more recognition of Moore’s Law breaking down. My main issue with predictions of this kind is that we are reaching the limit of silicon, and while there are many potential alternatives, none has been proven to match it.

There are three huge assumptions one must make. First, that we will replace silicon efficiently enough to keep pace with this growth. Second, that what we replace it with will follow the same growth pattern. Finally, that nothing else stops the process.

Another issue is that a computer that can match human-level computing power will not spontaneously generate sentience. We must also develop the software to make use of it; while I see no reason we will not eventually do so, it is still an assumption.

And of course Koomey’s law (I thank you for making me aware of this) has its own upper limit, which at our current rate we’d reach in about 40 years. This also hints at the potential that once we hit the singularity, this AI might just find itself hitting a wall shortly thereafter.

Personally, I think we should wait until we enter the post-silicon era and evaluate the growth potential of silicon’s replacement before making predictions.