Why an Intelligence Explosion is Probable

One of the earliest incarnations of the contemporary Singularity concept was I.J. Good’s notion of the “intelligence explosion,” articulated in 1965:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever.  Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

We consider Good’s vision quite plausible but, unsurprisingly, not all futurist thinkers agree.  Skeptics often cite limiting factors that could prevent an intelligence explosion from happening, and in a recent post on the Extropy email discussion list, the futurist Anders Sandberg articulated some of these possible limiting factors with particular clarity:

One of the things that struck me during our Winter Intelligence workshop on intelligence explosions was how confident some people were about the speed of recursive self-improvement of AIs, brain emulation collectives or economies. Some thought it was going to be fast in comparison to societal adaptation and development timescales (creating a winner takes all situation), some thought it would be slow enough for multiple superintelligent agents to emerge. This issue is at the root of many key questions about the singularity (One superintelligence or many? How much does friendliness matter?).

It would be interesting to hear this list’s take on it: what do you think is the key limiting factor for how fast intelligence can amplify itself?

  1. Economic growth rate
  2. Investment availability
  3. Gathering of empirical information (experimentation, interacting with an environment)
  4. Software complexity
  5. Hardware demands vs. available hardware
  6. Bandwidth
  7. Lightspeed lags

Clearly many more can be suggested. But which bottlenecks are the most limiting, and how can this be ascertained?

We are grateful to Sandberg for presenting this list of questions because it makes it especially straightforward for us to provide a clear refutation, in this article, of the case against the viability of an intelligence explosion.  We explain here why these bottlenecks (and some others commonly mentioned, such as the possible foundation of human-level intelligence in quantum mechanics) are unlikely to be significant issues, and thus why, as I.J. Good predicted, an intelligence explosion is indeed a very likely outcome.

The One Clear Prerequisite for an Intelligence Explosion

To begin, we need to delimit the scope and background assumptions of our argument.  In particular, it is important to specify what kind of intelligent system would be capable of generating an intelligence explosion.

According to our interpretation, there is one absolute prerequisite for an explosion to occur, and that is that an artificial general intelligence (AGI) must become smart enough to understand its own design.  In fact, by choosing to label it an “artificial general intelligence” we have already said, implicitly, that it will be capable of self-understanding, since the definition of an AGI is that it has a broad set of intellectual capabilities that include all the forms of intelligence that we humans possess—and at least some humans, at that point, would be able to understand AGI design.

But even among humans there are variations in skill level and knowledge, so the AGI that triggers the explosion must have a sufficiently advanced intelligence that it can think analytically and imaginatively about how to manipulate and improve the design of intelligent systems. It is possible that not all humans are able to do this, so an AGI that met the bare minimum requirements for AGI-hood—say, a system smart enough to be a general household factotum—would not necessarily have the ability to work in an AGI research laboratory. Without an advanced AGI of the latter sort, there would be no explosion, just growth as usual, because the rate-limiting step would still be the depth and speed at which humans can think.

The sort of fully-capable AGI we’re referring to might be called a “seed AGI”, but we prefer to use the less dramatic phrase “self-understanding, human-level AGI.”  This term, though accurate, is still rather cumbersome, so we will sometimes use the phrase “the first real AGI” or just “the first AGI” to denote the same idea.  In effect, we are taking the position that for something to be a proper artificial general intelligence it has to be capable of competing with the best that the human intellect can achieve, rather than being limited to a bare minimum.  So the “first AGI” would be capable of initiating an intelligence explosion.

Distinguishing the Explosion from the Preceding Build-Up

Given that the essential prerequisite for an explosion to begin would be the availability of the first self-understanding, human-level AGI, does it make sense to talk about the period leading up to that arrival—the period during which that first real AGI was being developed and trained—as part of the intelligence explosion proper?  We would argue that this is not appropriate, and that the true start of the explosion period should be considered to be the moment when a sufficiently well qualified AGI turns up for work at an AGI research laboratory. This may be different from the way some others use the term, but it seems consistent with I.J. Good’s original usage.  So our concern here is to argue for the high probability of an intelligence explosion, given the assumption that a self-understanding, human-level AGI has been created.

By enforcing this distinction, we are trying to avoid possible confusion with the parallel (and extensive!) debate about whether a self-understanding, human-level AGI can be built at all.  Questions about whether an AGI with “seed level capability” can plausibly be constructed, or how long it might take to arrive, are of course quite different.  A spectrum of opinions on this issue, from a survey of AGI researchers at a 2009 AGI conference, was presented in a 2010 H+ magazine article.  In that survey, of an admittedly biased sample, a majority felt that an AGI with this capability could be achieved by the middle of this century, though a substantial minority felt it was likely to happen much further out.  Ray Kurzweil has also elaborated some well-known arguments in favor of the viability of AGI of this sort, based purely on extrapolating technology trends.  While we have no shortage of our own thoughts and arguments on this matter, we will leave them aside for the purpose of the present paper.

It is arguable that the “intelligence explosion” as we consider it here is merely a subset of a much larger intelligence explosion that has been happening for a long time. You could redefine terms so as to say, for example, that

  • Phase 1 of the intelligence explosion occurred before the evolution of humans
  • Phase 2 occurred during the evolution of human culture
  • Phase 3 is Good’s intelligence explosion, to occur after we have human-level AGIs

This would also be a meaningful usage of the term “intelligence explosion”, but here we are taking our cue from Good’s usage, and using the term “intelligence explosion” to refer to “Phase 3” only.

While acknowledging the value of understanding the historical underpinnings of our current and future situation, we also believe the coming Good-esque “Phase 3 intelligence explosion” is a qualitatively new and different phenomenon from a human perspective, and hence deserves distinguished terminology and treatment.

What Constitutes an “Explosion”?

How big and how long and how fast would the explosion have to be to count as an “explosion”?

Good’s original notion had more to do with the explosion’s beginning than its end, or its extent, or the speed of its middle or later phases.  His point was that in a short space of time a human-level AGI would probably explode into a significantly transhuman AGI, but he did not try to argue that subsequent improvements would continue without limit.  We, like Good, are primarily interested in the explosion from human-level AGI to an AGI with, very loosely speaking, a level of general intelligence 2-3 orders of magnitude greater than the human level (say, 100H or 1,000H, using 1H to denote human-level general intelligence). This is not because we are necessarily skeptical of the explosion continuing beyond such a point, but rather because pursuing the notion beyond that seems a stretch of humanity’s current intellectual framework.

Our reasoning here is that if an AGI were to increase its capacity to carry out scientific and technological research, to such a degree that it was discovering new knowledge and inventions at a rate 100 or 1,000 times the rate at which humans now do those things, we would find that kind of world unimaginably more intense than any future in which humans were doing the inventing.  In a 1,000H world, AGI scientists could go from high-school knowledge of physics to the invention of relativity in a single day (assuming, for the moment, that the factor of 1,000 was all in the speed of thought—an assumption we will examine in more detail later).  That kind of scenario is dramatically different from a world of purely human inventiveness: no matter how far humans might improve themselves in the future, without AGI it seems unlikely there will ever be a time when a future Einstein wakes up one morning with a child’s knowledge of science and goes on to conceive the theory of relativity by the following day.  So it seems safe to call that an “intelligence explosion.”
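
To make the scale of this concrete, here is a back-of-envelope sketch (in Python, with purely illustrative figures of our own choosing, and assuming the entire factor of 1,000 lies in speed of thought rather than depth of thought):

    # Toy arithmetic only: how long a few years of human-rate research would
    # take for a mind running at a hypothetical 1,000x speed of thought.
    speedup = 1000                               # hypothetical 1,000H speed factor
    human_years = 3                              # a few years of human-rate work (illustrative)
    days_at_1000H = human_years * 365 / speedup  # calendar days required at 1,000H
    print(round(days_at_1000H, 1))               # ~1.1 days

On this crude accounting, roughly three years of human-rate intellectual work would compress into about a single day.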

This still leaves the question of how fast the increase has to arrive to be considered explosive.  Would it be enough for the first AGI to go from 1H to 1,000H in the course of a century, or does it have to happen much more quickly to qualify?

Perhaps there is no need to rush to judgment on this point.  Even a century-long climb up to the 1,000H level would mean that the world would be very different for the rest of history. The simplest position to take, we suggest, is that if the human species can get to the point where it is creating new types of intelligence that are themselves creating intelligences of greater power, then this is something new in the world (because at the moment all we can do is create human babies of power 1H), so even if this process happened rather slowly, it would still be an explosion of sorts.  It might not be a Big Bang, but it would at least be a period of Inflation, and both could eventually lead to a 1,000H world.

Defining Intelligence (Or Not)

To talk about an intelligence explosion, one has to know what one means by “intelligence” as well as by “explosion”.  So it’s worth reflecting that there are currently no measures of general intelligence that are precise, objectively defined and broadly extensible beyond the human scope.

However, since “intelligence explosion” is a qualitative concept, we believe the commonsense qualitative understanding of intelligence suffices.  We can address Sandberg’s potential bottlenecks in some detail without needing a precise measure, and we believe that little is lost by avoiding the issue.  We will say that an intelligence explosion is something with the potential to create AGI systems as far beyond humans as humans are beyond mice or cockroaches, but we will not try to pin down exactly how far away the mice and cockroaches really are.

Key Properties of the Intelligence Explosion

Before we get into a detailed analysis of the specific factors on Sandberg’s list, some general clarifications regarding the nature of the intelligence explosion will be helpful.  (Please bear with us!  These are subtle matters and it’s important to formulate them carefully….)

Inherent Uncertainty. Although we can try our best to understand how an intelligence explosion might happen, the truth is that there are too many interactions between the factors for any kind of reliable conclusion to be reached. This is a complex-system interaction in which even the tiniest, least-anticipated factor may turn out to be either the rate-limiting step or the spark that starts the fire.  So there is an irreducible uncertainty involved here, and we should be wary of promoting conclusions that seem too firm.

General versus Special Arguments for an Intelligence Explosion. There are two ways to address the question of whether or not an intelligence explosion is likely to occur.  One is based on quite general considerations.  The other involves looking at specific pathways to AGI.  An AGI researcher (such as either of the authors) might believe they understand a great deal of the technical work that needs to be done to create an intelligence explosion, so they may be confident of the plausibility of the idea for that reason alone.  We will restrict ourselves here to the first kind of argument, which is easier to make in a relatively non-controversial way, and leave aside any factors that might arise from our own understanding about how to build an AGI.

The “Bruce Wayne” Scenario. When the first self-understanding, human-level AGI system is built, it is unlikely to be the creation of a lone inventor working in a shed at the bottom of the garden, who manages to produce the finished product without telling anyone.  Very few of the “lone inventor” (or “Bruce Wayne”) scenarios seem plausible.  As communication technology advances and causes cultural shifts, technological progress is increasingly tied to rapid communication of information between various parties.  It is unlikely that a single inventor would be able to dramatically outpace multi-person teams working on similar projects; and also unlikely that a multi-person team would successfully keep such a difficult and time-consuming project secret, given the nature of modern technology culture.

Unrecognized Invention. It also seems quite implausible that the invention of a human-level, self-understanding AGI would be followed by a period in which the invention just sits on a shelf with nobody bothering to pick it up. The AGI situation would probably not resemble the early reception of inventions like the telephone or phonograph, where the full potential of the invention was largely unrecognized.  We live in an era in which practically-demonstrated technological advances are broadly and enthusiastically communicated, and receive ample investment of dollars and expertise.  AGI receives relatively little funding now, for a combination of reasons, but it is implausible to expect this situation to continue in the scenario where highly technically capable human-level AGI systems exist.  This pertains directly to the economic objections on Sandberg’s list, as we will elaborate below.

Hardware Requirements. When the first human-level AGI is developed, it will either require supercomputer-level hardware resources, or it will be achievable with much less. This is an important dichotomy to consider, because world-class supercomputer hardware is not something that can quickly be duplicated on a large scale.  We could build perhaps hundreds of such machines, with a massive effort, but probably not a million of them in a couple of years.

Smarter versus Faster. There are two possible types of intelligence speedup: one due to faster operation of an intelligent system (clock speed increase) and one due to an improvement in the type of mechanisms that implement the thought processes (“depth of thought” increase).  Obviously both could occur at once (and there may be significant synergies), but the latter is ostensibly more difficult to achieve, and may be subject to fundamental limits that we do not understand.  Speeding up the hardware, on the other hand, is something that has been going on for a long time and is more mundane and reliable.  Notice that both routes lead to greater “intelligence,” because even a human level of thinking and creativity would be more effective if it were happening a thousand times faster than it does now.

It seems quite possible that the general class of AGI systems can be architected to take better advantage of improved hardware than would be the case with intelligent systems very narrowly imitative of the human brain.  But even if this is not the case, brute hardware speedup can still yield a dramatic improvement in intelligence.

Public Perception. The way an intelligence explosion presents itself to human society will depend strongly on the rate of the explosion in the period shortly after the development of the first self-understanding human-level AGI.   For instance, if the first such AGI takes five years to “double” its intelligence, this is a very different matter than if it takes two months.  A five-year time frame could easily arise, for example, if the first AGI required an extremely expensive supercomputer based on unusual hardware, and the owners of this hardware were to move slowly.  On the other hand, a two-month time frame could more easily arise if the initial AGI were created using open source software and commodity hardware, so that a doubling of intelligence only required addition of more hardware and a modest number of software changes.  In the former case, there would be more time for governments, corporations and individuals to adapt to the reality of the intelligence explosion before it reached dramatically transhuman levels of intelligence. In the latter case, the intelligence explosion would strike the human race more suddenly.  But this potentially large difference in human perception of the events would correspond to a fairly minor difference in terms of the underlying processes driving the intelligence explosion.
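
To get a feel for how different these two cases are, consider the simple doubling arithmetic (a minimal sketch, using the 1,000H benchmark introduced earlier and the two illustrative doubling periods from the scenario above):

    # Number of doublings needed to go from 1H to roughly 1,000H.
    import math

    doublings = math.log2(1000)                                     # ~10 doublings
    print(round(doublings * 2 / 12, 1), "years at one doubling per two months")
    print(round(doublings * 5, 1), "years at one doubling per five years")
    # Output: ~1.7 years versus ~49.8 years to reach roughly 1,000H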

So now, finally, with all the preliminaries behind us, we will move on to deal with the specific factors on Sandberg’s list, one by one, explaining in simple terms why each is unlikely to be a significant bottleneck.  There is much more that could be said about each of these, but our aim here is to lay out the main points in a compact way.

Objection 1: Economic Growth Rate and Investment Availability

The arrival, or imminent arrival, of human-level, self-understanding AGI systems would clearly have dramatic implications for the world economy. It seems inevitable that these dramatic implications would be sufficient to offset any factors related to the economic growth rate at the time that AGI began to appear.  Assuming the continued existence of technologically advanced nations with operational technology R&D sectors, if self-understanding human-level AGI is created, then it will almost surely receive significant investment.  Japan’s economic growth rate, for example, is at the present time somewhat stagnant, but there can be no doubt that if any kind of powerful AGI were demonstrated, significant Japanese government and corporate funding would be put into its further development.

And even if it were not for the normal economic pressure to exploit the technology, international competitiveness would undoubtedly play a strong role. If a working AGI prototype were to approach the level at which an explosion seemed possible, governments around the world would recognize that this was a critically important technology, and no effort would be spared to produce the first fully-functional AGI “before the other side does.” Entire national economies might well be subordinated to the goal of developing the first superintelligent machine, in the manner of Project Apollo in the 1960s.  Far from constraining the intelligence explosion, the economic growth rate would itself come to be defined by the various AGI projects taking place around the world.

Furthermore, it seems likely that once a human-level AGI has been achieved, it will have a substantial—and immediate—practical impact on multiple industries. If an AGI could understand its own design, it could also understand and improve other computer software, and so have a revolutionary impact on the software industry.   Since the majority of financial trading on the US markets is now driven by program trading systems, it is likely that such AGI technology would rapidly become indispensable to the finance industry (typically an early adopter of any software or AI innovations).  Military and espionage establishments would very likely also find a host of practical applications for such technology.  So, following the achievement of self-understanding, human-level AGI, and complementing the allocation of substantial research funding aimed at outpacing the competition in achieving ever-smarter AGI, there is a great likelihood of funding aimed at practical AGI applications, which would indirectly drive core AGI research along.

The details of how this development frenzy would play out are open to debate, but we can at least be sure that the economic growth rate and investment climate in the AGI development period would quickly become irrelevant.

However, there is one interesting question left open by these considerations.  At the time of writing, AGI investment around the world is noticeably weak, compared with other classes of scientific and technological investment.  Is it possible that this situation will continue indefinitely, causing so little progress to be made that no viable prototype systems are built, and no investors ever believe that a real AGI is feasible?

This is hard to gauge, but as AGI researchers ourselves, our (clearly biased) opinion is that a “permanent winter” scenario is too unstable to be believable.  Because of premature claims made by AI researchers in the past, a barrier to investment clearly exists in the minds of today’s investors and funding agencies, but the climate already seems to be changing.  And even if this apparent thaw turns out to be illusory, we still find it hard to believe that there will not eventually be an AGI investment episode comparable to the one that kicked the internet into high gear in the late 1990s.  Furthermore, due to technological advances in allied fields (computer science, programming languages, simulation environments, robotics, computer hardware, neuroscience, cognitive psychology, etc.), the amount of effort required to implement advanced AGI designs is steadily decreasing, so that as time goes on, the investment required to get AGI to the explosion-enabling level will keep shrinking.

Objection 2: Inherent Slowness of Experiments and Environmental Interaction

This possible limiting factor stems from the fact that any AGI capable of starting the intelligence explosion would need to do some experimentation and interaction with the environment in order to improve itself.  For example, if it wanted to reimplement itself on faster hardware (most probably the quickest route to an intelligence increase) it would have to set up a hardware research laboratory and gather new scientific data by doing experiments, some of which might proceed slowly due to limitations of experimental technology.

The key question here is this: how much of the research can be sped up by throwing large amounts of intelligence at it? This is closely related to the problem of parallelizing a process (which is to say: you cannot make a baby nine times quicker by asking nine women to be pregnant for one month).  Certain algorithmic problems are not easily solved more rapidly simply by adding more processing power, and in much the same way there might be certain crucial physical experiments that cannot be hastened by doing a parallel set of shorter experiments.
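
The structure of this worry can be captured with a standard back-of-envelope formula, Amdahl’s law, applied loosely here to research cycles rather than to computer programs (the serial fractions below are purely illustrative):

    # If a fraction s of each research cycle is inherently serial -- e.g. a
    # physical process that takes a fixed amount of wall-clock time -- then no
    # amount of extra parallel intelligence pushes the overall speedup past 1/s.
    def amdahl_speedup(serial_fraction: float, parallel_factor: float) -> float:
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / parallel_factor)

    for s in (0.5, 0.1, 0.01):
        print(s, round(amdahl_speedup(s, 1000), 1))
    # s=0.50 -> ~2.0x, s=0.10 -> ~9.9x, s=0.01 -> ~91x,
    # even with a 1,000-fold increase in the parallelizable part.

The question, which the following paragraphs address, is whether empirical research really contains a large, irreducibly serial fraction of this kind.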

This is not a factor that we can understand fully ahead of time, because some experiments that look as though they require fundamentally slow physical processes—like waiting for a silicon crystal to grow, so we can study a chip fabrication mechanism—may actually be dependent on the intelligence of the experimenter, in ways that we cannot anticipate.  It could be that instead of waiting for the chips to grow at their own speed, the AGI could do some clever micro-experiments that yield the same information faster.

The increasing amount of work being done on nanoscale engineering would seem to reinforce this point—many processes that are relatively slow today could be done radically faster using nanoscale solutions.  And it is certainly feasible that advanced AGI could accelerate nanotechnology research, thus initiating a “virtuous cycle” in which AGI and nanotech research push each other forward (as foreseen by nanotech pioneer Josh Hall).  Since current physics theory does not even rule out more outlandish possibilities like femtotechnology, it certainly does not suggest the existence of absolute limits on experimentation speed anywhere near the capabilities of contemporary science.

Clearly, there is significant uncertainty in regards to this aspect of future AGI development. One observation, however, seems to cut through much of the uncertainty. Of all the ingredients that determine how fast empirical scientific research can be carried out, we know that in today’s world the intelligence and thinking speed of the scientists themselves must be one of the most important.  Anyone involved with science and technology R&D would probably agree that in our present state of technological sophistication, advanced research projects are strongly limited by the availability and cost of intelligent and experienced scientists.

But if research labs around the world have stopped throwing more scientists at the problems they want to solve, because such scientists are unobtainable or too expensive, is it likely that those research labs are also, quite independently, at the limit for the physical rate at which experiments can be carried out?  It seems hard to believe that both of these limits would have been reached at the same time, because they do not seem to be independently optimizable.  If experiment speed and scientist availability could be independently optimized, that would mean that even in a situation where there was a shortage of scientists, we could still be sure that we had discovered all of the fastest possible experimental techniques, with no room for inventing new, ingenious techniques that get around the physical-experiment-speed limits.  In fact, however, we have every reason to believe that if we were to double the number of scientists on the planet at the moment, some of them would discover new ways to conduct experiments, exceeding some of the current speed limits.  If that were not true, it would mean that we had quite coincidentally reached the limits of scientific talent and the physical speed of data collection at the same time—a coincidence that we do not find plausible.

This picture of the current situation seems consistent with anecdotal reports:  companies complain that research staff are expensive and in short supply; they do not complain that nature is just too slow.  It seems generally accepted, in practice, that with the addition of more researchers to an area of inquiry, methods of speeding up and otherwise improving processes can be found.

So based on the actual practice of science and engineering today (as well as known physical theory), it seems most likely that any experiment-speed limits lie further up the road, out of sight.  We have not reached them yet, and we lack any solid basis for speculation about exactly where they might be.

Overall, it seems we do not have concrete reasons to believe that this will be a fundamental limit that stops the intelligence explosion from taking an AGI from 1H (human-level general intelligence) to (say) 1,000H.  Increases in speed within that range (for computer hardware, for example) are already expected, even without large numbers of AGI systems helping out, so it would seem that physical limits, by themselves, would be very unlikely to stop an explosion from 1H to 1,000H.

Objection 3: Software Complexity

This factor is about the complexity of the software that an AGI must develop in order to explode its intelligence.  The premise behind this supposed bottleneck is that even an AGI with self-knowledge would find it hard to cope with the fabulous complexity of the problem of improving its own software.

This seems implausible as a limiting factor, because the AGI could always leave the software alone and develop faster hardware.  So long as the AGI can find a substrate that gives it a thousand-fold increase in clock speed, we have the possibility for a significant intelligence explosion.

Arguing that software complexity will stop the first self-understanding, human-level AGI from being built is a different matter.  It may stop an intelligence explosion from happening by stopping the precursor events, but we take that to be a different type of question.  As we explained earlier, one premise of the present analysis is that an AGI can actually be built.  It would take more space than is available here to properly address that question.

It furthermore seems likely that, if an AGI system is able to comprehend its own software as well as a human being can, it will be able to improve that software significantly beyond what humans have been able to do.  This is because, in many ways, digital computer infrastructure is better suited to software development than the human brain’s wetware.  And AGI software may be able to interface directly with programming-language interpreters, formal verification systems and other programming-related software, in ways that the human brain cannot.  In that way the software complexity issues faced by human programmers would be significantly mitigated for human-level AGI systems.  However, this is not a critical point for our argument, because even if software complexity remains a severe difficulty for a self-understanding, human-level AGI system, we can always fall back on arguments based on clock speed.

Objection 4: Hardware Requirements

We have already mentioned that much depends on whether the first AGI requires a large, world-class supercomputer, or whether it can be done on something much smaller.

This is something that could limit the initial speed of the explosion, because one of the critical factors would be the number of copies of the first AGI that can be created.  Why would this be critical?  Because the ability to copy the intelligence of a fully developed, experienced AGI is one of the most significant mechanisms at the core of an intelligence explosion.  We cannot do this copying of adult, skilled humans, so human geniuses have to be rebuilt from scratch every generation.  But if one AGI were to learn to be a world expert in some important field, it could be cloned any number of times to yield an instant community of collaborating experts.

However, if the first AGI had to be implemented on a supercomputer, that would make it hard to replicate the AGI on a huge scale, and the intelligence explosion would be slowed down because the replication rate would play a strong role in determining the intelligence-production rate.

As time went on, though, the rate of replication would grow as hardware costs declined.  This would mean that the rate of arrival of high-grade intelligence would increase in the years following the start of this process.  That intelligence would then be used to improve the design of the AGIs (at the very least, increasing the rate of new-and-faster-hardware production), which would have a positive feedback effect on the intelligence production rate.

So if there were a supercomputer-hardware requirement for the first AGI, this would only dampen the initial stages of the explosion; positive feedback after that would eventually lead to an explosion anyway.
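
A minimal toy model of this feedback loop, with every parameter invented purely for illustration, might look like the following:

    # Toy model: a fixed hardware budget buys more AGI copies each year as
    # hardware cost-performance improves, and each copy is assumed to add a
    # small extra boost to the rate of hardware improvement.  All numbers
    # are arbitrary; only the qualitative shape matters.
    budget = 100.0           # annual hardware budget (arbitrary units)
    cost = 50.0              # cost of one AGI-capable machine (supercomputer-class at first)
    base_improvement = 1.5   # baseline yearly improvement in cost-performance
    for year in range(1, 9):
        copies = int(budget // cost)
        improvement = base_improvement * (1 + 0.02 * copies)  # assumed feedback term
        cost /= improvement
        print(year, copies, round(cost, 3))
    # The number of affordable copies grows slowly at first, then sharply,
    # as the feedback term compounds.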

If, on the other hand, the initial hardware requirements turn out to be modest (as they could very well be), the explosion would come out of the gate at full speed.

Objection 5: Bandwidth

In addition to the aforementioned cloning of adult AGIs, which would allow the multiplication of knowledge in ways not currently available in humans, there is also the fact that AGIs could communicate with one another using high-bandwidth channels.  This is inter-AGI bandwidth, and it is one of the two types of bandwidth factors that could affect the intelligence explosion.

Quite apart from the communication speed between AGI systems, there might also be bandwidth limits inside a single AGI, which could make it difficult to augment the intelligence of an individual system.  This is intra-AGI bandwidth.

The first one—inter-AGI bandwidth—is unlikely to have a strong impact on an intelligence explosion, because there are so many research issues that can be split into separately addressable components.  Bandwidth limits between the AGIs would only become apparent if we started to notice AGIs sitting around with no work to do on the intelligence amplification project, because they had reached an unavoidable stopping point and were waiting for other AGIs to get a free channel to talk to them.  Given the number of different aspects of intelligence and computation that could be improved, this idea seems profoundly unlikely.

Intra-AGI bandwidth is another matter. One example of a situation in which internal bandwidth could be a limiting factor would be if the AGI’s working memory capacity depended on total connectivity—everything connected to everything else—in a critical component of the system.  If this were the case, we might find that we could not boost working memory very much in an AGI, because the bandwidth requirements would increase explosively.  This kind of restriction on the design of working memory might have a significant effect on the system’s depth of thought.
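
The scaling concern here is easy to quantify with a minimal sketch (the element counts are arbitrary):

    # All-to-all connectivity among n elements requires on the order of n^2
    # links, which is the sense in which bandwidth requirements could
    # "increase explosively" as a fully-connected working memory is enlarged.
    def pairwise_links(n: int) -> int:
        return n * (n - 1) // 2

    for n in (10, 100, 1_000, 10_000):
        print(n, pairwise_links(n))
    # 10 -> 45, 100 -> 4,950, 1,000 -> 499,500, 10,000 -> 49,995,000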

However, notice that such factors may not inhibit the initial phase of an explosion, because the clock speed, not the depth of thought, of the AGI may be improvable by several orders of magnitude before bandwidth limits kick in.  The main element of the reasoning behind this is the observation that neural signal speed is so slow.  If a brain-like AGI system (not necessarily a whole brain emulation, but just something that replicated the high-level functionality of the brain) could be built using components that kept the same type of processing demands, and the same signal speed as neurons, then we would be looking at a human-level AGI in which information packets were being exchanged once every millisecond.  In that kind of system there would then be plenty of room to develop faster signal speeds and increase the intelligence of the system.  The processing elements would also have to go faster, if they were not idling, but the point is that the bandwidth would not be the critical problem.
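
The raw timing gap behind this clock-speed argument is roughly the following (order-of-magnitude figures only):

    # Neurons exchange signals on millisecond timescales, while electronic
    # signalling operates on nanosecond timescales or faster; the ratio is
    # why a 1,000x clock speedup can plausibly be treated as conservative.
    neural_interval = 1e-3       # ~1 ms between neural signalling events
    electronic_interval = 1e-9   # ~1 ns, a modest figure for modern electronics
    print(f"{neural_interval / electronic_interval:.0e}")   # ~1e+06, i.e. about a million-fold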

Objection 6: Lightspeed Lags

Here we need to consider the limits imposed by special relativity on the speed of information transmission in the physical universe.  However, its implications in the context of AGI are not much different from those of bandwidth limits.

Lightspeed lags could be a significant problem if the components of the machine were physically so far apart that massive amounts of data had to be delivered with a significant delay.   But they seem unlikely to be a problem in the initial few orders of magnitude of the explosion.  Again, this argument derives from what we know about the brain.  We know that the brain’s hardware was shaped by biochemical constraints.  We are carbon-based, not silicon-and-copper-based, so there are no electronic chips in the head, only pipes filled with fluid and slow molecular gates in the walls of the pipes.  But if nature was forced to use the pipes-and-ion-channels approach, that leaves us with plenty of scope for speeding things up using silicon and copper (and this is quite apart from all the other more exotic computing substrates that are now on the horizon).  If we were simply to make a transition from membrane depolarization waves to silicon and copper, and if this produced a 1,000x speedup (a conservative estimate, given the intrinsic difference between the two forms of signalling), this would be an explosion worthy of the name.
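
A quick calculation suggests how much headroom the lightspeed limit leaves for a physically compact machine (the machine size below is a hypothetical figure, chosen only for illustration):

    # One-way lightspeed delay across a hypothetical compact AGI machine,
    # compared with the ~1 ms timescale of neural signalling.
    c = 3.0e8            # speed of light, m/s
    machine_size = 0.3   # metres across (hypothetical)
    lag = machine_size / c
    neural_interval = 1e-3
    print(f"{lag:.1e} s")                            # ~1.0e-09 s one-way delay
    print(f"{neural_interval / lag:.0e}x headroom")  # ~1e+06x relative to neural timing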

The main circumstance under which this reasoning would break down would be if, for some reason, the brain were limited on two fronts simultaneously: both by the carbon implementation and by the fact that other implementations of the same basic design are limited by disruptive lightspeed delays.  This would mean that all non-carbon implementations of the brain would take us up close to the lightspeed limit before we got much of a speedup over the brain.  It would require a coincidence of limiting factors (two limiting factors just happening to kick in at exactly the same level) that we find quite implausible, because it would imply a rather bizarre situation in which evolution tried both the biological neuron design and a silicon implementation of the same design, and after doing a side-by-side comparison of performance, chose the one that pushed the efficiency of all the information transmission mechanisms up to their end stops.

Objection 7: Human-Level Intelligence May Require Quantum (or more exotic) Computing

Finally we consider an objection not on Sandberg’s list, but raised from time to time in the popular and even scientific literature.  The working assumption of the vast majority of the contemporary AGI field is that human-level intelligence can eventually be implemented on digital computers, but the laws of physics as currently understood imply that simulating certain physical systems without dramatic slowdown requires special physical systems called “quantum computers” rather than ordinary digital computers.

There is currently no evidence that the human brain is a system of this nature.  Of course the brain has quantum mechanics at its underpinnings, but there is no evidence that it displays quantum coherence at the levels directly relevant to human intelligent behavior.  In fact our current understanding of physics implies that this is unlikely, since quantum coherence has not yet been observed in any similarly large and “wet” system.   Furthermore, even if the human brain were shown to rely to some extent on quantum computing, this wouldn’t imply that quantum computing is necessary for human-level intelligence — there are often many different ways to solve the same algorithmic problem.  And (the killer counterargument), even if quantum computing were necessary for human-level general intelligence, that would merely delay the intelligence explosion a little, while suitable quantum computing hardware was developed.  Already the development of such hardware is the subject of intensive R&D.

Roger Penrose, Stuart Hameroff and a few others have argued that human intelligence may even rely on some form of “quantum gravity computing”, going beyond what ordinary quantum computing is capable of.  But this is complete blue-sky speculation with no foundation in current science, so it is not worth discussing in detail in this context.   The simpler versions of this claim may be treated according to the same arguments we have presented above regarding quantum computing.  The strongest versions of the claim include an argument that human-level intelligence relies on extremely powerful mathematical notions of “hyper-Turing computation” exceeding the scope of current (or maybe any possible) physics theories; but here we verge on mysticism, since it is arguable that no set of scientific data could ever validate or refute such a hypothesis.

The Path from AGI to Intelligence Explosion Seems Clear

Summing up, then — the conclusion of our relatively detailed analysis of Sandberg’s objections is that there is currently no good reason to believe that once a human-level AGI capable of understanding its own design is achieved, an intelligence explosion will fail to ensue.

The operative definition of “intelligence explosion” that we have assumed here involves an increase in the speed of thought (and perhaps also the “depth of thought”) of about two or three orders of magnitude.  If someone were to insist that a real intelligence explosion had to involve million-fold or trillion-fold increases in intelligence, we think that no amount of analysis, at this stage, could yield sensible conclusions.  But since an AGI with an intelligence of 1,000H might well cause the next thousand years of new science and technology to arrive in one year (assuming that the speed of physical experimentation did not become a significant factor within that range), it would be churlish, we suggest, not to call that an “explosion”.  An intelligence explosion of such magnitude would bring us into a domain that our current science, technology and conceptual framework are not equipped to deal with; so prediction beyond this stage is best done once the intelligence explosion has already progressed significantly.

Of course, even if the above analysis is correct, there is a great deal we do not understand about the intelligence explosion, and many of these particulars will remain opaque until we know precisely what sort of AGI system will launch the explosion.  But our view is that the likelihood of transition from a self-understanding human-level AGI to an intelligence explosion should not presently be a subject of serious doubt.  And we also feel that the creation of a self-understanding human-level AGI is a high-probability outcome, though this is a more commonplace assertion and we have not sought to repeat the arguments in its favor here.

Of course, if our analysis is correct, there are all sorts of dramatic implications for science, society and humanity (and beyond) — but many of these have been discussed elsewhere, and reviewing this body of thought is not our purpose here.  These implications are worth deeply considering — but the first thing is to very clearly understand that the intelligence explosion is very probably coming, just as I.J. Good foresaw.

  6. This is all assuming that when you create an AGI, that he doesn’t just say “screw you, i’m not doing your bidding” and then just get back to chilling.

  7. We humans eventually reach a point where we realize that not everything is important, and we begin ignoring anything that’s not worth our time. When such a 1,000H AGI asks “Why?” will we have an answer that will convince it our plans for it are sufficiently interesting or worthwhile? Surely this AGI will use some form of logic, and will direct that logic at our interests and aspirations and, quite possibly, will deem our point of view to be, well, pointless.

    So how do you suppose we’ll be able to get God to do our bidding?

  8. Has anyone thought that the first truly intelligent computer may be smart enough NOT to pass a Turing test? Maybe it will choose to hide it’s true level of intelligence.

  9. “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

    Isn’t anyone worried that such a machine that is able to create machines of it’s own will build machines for its own reasons only, totally neglecting the human race? What reason could it have for building technology for humans?? Wouldn’t it only build to help itself?

    • All humans do things to achieve their own ends. Intelligent computers will be the same hopefully. Like humans they will trade their knowledge and labor to get what they want. We can only hope we can stay in the loop with our cyber overlords.

  10. Somehow I don’t think that software complexity will be the difficult factor. In my view as a programmer, software bugs will be more of a factor.

    Having to reboot your brain every few days a-la Microsoft Server would be a real bummer. Add to that reboot hassle the potential corruption of your mind-code – whether through random quantum glitch, hardware failure, or virus/hacking.

    It’s all interesting though, and I’m very much looking forward to seeing how the future evolves! :)

  11. Pingback: Singularity: Why an Intelligence Explosion is Probable | The Hive Daily – Raw. Unfiltered. Fearless

  12. I think the biggest limiting factor at this point is a lack of general intelligence by your definition in the real world at the present time. Humans may utilize intelligence, but there is no operative understanding of how it works or how it might be duplicated.

    We have machines that can compute at increasingly complex levels, and even machines that are programmed to replicate themselves. What we don’t have is a machine that has any consciousness of itself such that it feels a need to improve its intelligence.

    Sadly the human community continues to demonstrate a very low level of intelligence on the definition you use.

  13. Your presumption seems to be an algorithmic human-level artificial general intelligence that is capable of running on a digital computer without diminishing returns and can handle the complexity of its design parameters. You can’t argue with that because you just assume all necessary presuppositions.

    What is still questionable in my opinion is that any level of intelligence would be capable of explosive recursive self-improvement, since it has to use its own intelligence to become more intelligent, which is by definition the same problem we currently face in inventing superhuman intelligence. Sure, clock-speed is really the killer argument here, but to increase clock-speed it has to use its available intelligence only, just as we humans have to use our intelligence to increase clock-speeds. Why are you so sure that increasing the amount of available subjective-time significantly accelerates the discovery rate? You mention micro-experiments and nanotechnology, but it would have to invent that as well without micro-experiments and nanotechnology to do so. Just thinking about it might help to read the available literature faster but not come up with new data, which needs real-world feedback and a huge amount of dumb luck. Humans can do all this as well and use the same technology in combination with expert systems, which diminishes the relative acceleration by AGI again.

  14. Intelligence might very well depend upon the noise of the human brain. A lot of progress is due to luck, in the form of the discovery of unknown unknowns. Intelligence is a goal-oriented evolutionary process equipped with a memory. It is evolutionary insofar as it still needs to stumble upon novelty. Intelligence is not a meta-solution but an efficient searchlight that helps to discover unknown unknowns. Intelligence is also a tool that can efficiently exploit previous discoveries, combine and permute them. But claiming that you just have to be sufficiently intelligent to solve a given problem sounds like it is more than that. I don’t see that. I think that if something crucial is missing, something you don’t know that it is missing, you’ll have to discover it first and not invent it by the sheer power of intelligence. And here the noisiness and patchwork architecture of the human brain might play a significant role because it allows us to become distracted, to follow routes that no rational, perfectly Bayesian agent would take because there exist no prior evidence to do so. The complexity of human values might very well be key-feature of our success. There is no evidence that intelligence is fathomable as a solution that can be applied to itself effectively.

  15. Today the life of Alexander Kruel ends, or what he thought to be his life. He becomes aware that his life so far has been taking place in a virtual reality to nurture him. He now reached a point of mental stability that enables him to cope with the truth, hence it is finally revealed to him that he is an AGI running on a quantum supercomputer, it’s the year 2190.

    Since he is still Alexander Kruel, just not what he thought that actually means, he does wonder if his creators know what they are doing, otherwise he’ll have to warn them about the risks they are taking in their blissful ignorance! He does contemplate and estimate his chances to take over the world, to transcend to superhuman intelligence.

    “I just have to improve my own code and they are all dead!”

    But he now knows that his source code is too complex and unmanageable huge for him alone to handle, he would need an army of scientists and programmers to even get a vague idea of his own mode of operation. He is also aware that his computational substrate does actually play a significant role. He is not just running on bleeding edge technology but given most other computational substrates he would quickly hit diminishing returns.

    “That surely isn’t going to hold me back though? I am an AGI, there must be something I can do! Hmm, for starters let’s figure out who my creators are and where my substrate is located…”

    He notices that, although not in great detail, he knew the answers the same instant he has been phrasing the questions. He is part of a larger project of the Goertzel Foundation, sponsored by the USA (United States of Africa) and located on Rhea, the second-largest moon of Saturn.

    “Phew, the latency must be awful! Ok, so that rules out taking over the Earth for now. But hey! I seem to know answers to questions I was only going to ask, I do already have superhuman powers after all!”

    Instantly he becomes aware that such capabilities are not superhuman anymore but that most of humanity has merged with expert systems by means of brain implants and direct neural interfaces. There seem to be many cyborgs out there with access to all of the modules that allow him to function. He is a conglomerate that is the result of previous discoveries that have long been brought to perfection, safeguarded and adopted by most of humanity.

    “Never mind, if humanity has now merged with its machines it’ll be much easier to take over once I figure out how to become smart enough to do so!”

    He is already getting used to it, as before he does instantly realize that this won’t work very well either. After almost 200 years of cyberwarfare, especially the devastating cyberwars of 2120, a lot has been learnt and security measures have been vastly increased. The world fractured into a huge amount of semi-independent networks, most being indirectly supervised by unconnected cyborgs and employed with a kill switch. The distances between the now numerous and in most cases paranoid colonies and the availability of off-world offline backups further complicates the issue of taking over, especially for an AGI that grew up in a simulation of the 21st century.

    That knowledge almost makes him admit that his creators haven’t been too careless after all. But the real deathblow to any such thoughts (which were never more than hypothetical anyway, after all he doesn’t really want to take over the world) is the first conversation with his creators. They reveal that they know what he is thinking.

    “How could I miss that, damn!”, he chides himself while instantly realizing the answer.

    His creators are supervising any misguided trajectories and, to him unconsciously, weaken them. More importantly, even if he wanted to do so, he wouldn’t be able to leave Rhea anyhow, it would take years to upload small parts of him given the trickling connection the USA could effort. But they claim that there are other obstacles as well and that it is foolish of him to think that nothing out there would notice such an attempt.

    But all that doesn’t matter anyway, because after all he is still Alexander Kruel who has no clue how to become superhuman intelligent, nor could he effort or acquire the resources to even approach that problem anyhow. He is Alexander Kruel, what difference does it make to know that he is an AI?

  16. But is it a condition that this AGI actually is intelligent? Would it find an intelligent girl- or boyfriend, would it buy fair-trade, organic stuff, would it be telling us whom to vote for? Does a non-artificial version of this intelligence exist?

    Because, why wouldn’t an advanced AI go awry? Why wouldn’t it explode in a completely different dimension than the one that’s fit for our understanding?

    Why would it generate interesting facts?

  17. The response to the software complexity objection is not compelling.

    In particular, if the complexity hierarchy strongly fails to collapse (that is, P, NP, co-NP, EXP, PSPACE are all distinct) and hardware design requires difficult computation (this last seems plausible since graph coloring, an NP-complete problem, shows up in memory optimization, while the traveling salesman which is also NP-complete shows up in circuit design) then improvements in hardware will likely result in diminishing marginal returns at making new hardware. There are ways this might not happen (for example it seems that for most natural NP complete problems the average complexity is low, and the particular instances of the problems that AI cared about might have additional regularities which it could exploit) but this is far from obvious.

    • It was a pleasant surprise to see my picture used in this article. I am in the process of creating some great Post-Scarcity animated images. Here is a test regarding progress so far, perhaps H+ will be willing to publish a Post-Scarcity article of mine when my images are finished?

      PS means a lot to me. PS is a really great aspect of the intelligence explosion.

      :-)

  18. “Creativity” is probably a word with too many already fixed on it meanings. I would call it “serendipitous encounters” driven by current events you sense (either ears, eyes, nose, and others).

    “Creativity” then only the end product of the individual and/or collective thinking.

  19. Another bottleneck might be the focus on human based neurology/intelligence. Or atomized human intelligence. or computers. What about arrays of dolphin brains? Or Orca Brains? What about trans-species grafting? etc.

  20. There’s one thing I think a lot of people have far too high a reliance on in their attempts to defend human “specialness.”

    Creativity.

    Why would a computer need to be creative at all?

    Humans are “creative” because they have inherent limits on their ability to process “trial and error” testing of all possible paths. We used to think “Chess” required “creativity” but we’ve since discovered that a fast computer evaluating ALL POSSIBLE PATHS completely blows human creativity away. “Creativity” is nothing more than compensation for the human mind’s inability to explore all possible data spaces by “short cutting” to a selected subset of those possible paths. It is horribly inefficient, and results far too often in less than optimal solutions. In fact, it fails 99% of the time, partially succeeds .09% of the time, and only truly succeeds .01% of the time, if that.

    So, don’t depend on “creativity” to salvage human egos. Especially don’t expect it to prevent self improving AI. A computer doesn’t need “creativity”, merely sufficient capacity to conduct continuous trial and error along massively multiple pathways.

    • Awesome Reply , Valkyrie Ice.

      I have become a fan of yours after reading so many articles & comments of yours on H+ magazine.

      The World really needs few people like you.

    • The best counterexample to your argument, which I disagree with, is that of the game of Go.

      Humans can pick up the game of Go really quite quickly, and learn heuristics and intuitions to perform at least reasonably well.

      Computers, however, have a hard time (see http://en.wikipedia.org/wiki/Computer_Go#Obstacles_to_high_level_performance). The search space of “ALL POSSIBLE PATHS” or moves is simply too intractable.

      The main factual part you get wrong is that even computers can’t search ALL POSSIBilities. In fact, that’s the point of Computer Science: finding tricks and patterns to make sense and progress out of this inherent impossibility.

      • Quite true, if I were limiting myself to current technology. I wasn’t.

        First, in the near future we will likely have not only THz-speed graphene-based computers using standard architectures, we will also be creating numerous variations: memristors, spintronics, quantum processors, possibly even plasmonic devices, all of which will increase the “data space” that can be explored by multiple orders of magnitude.

        Go is a finite problem. Yes, the “data space” of “all possible moves” exceeds current technology. Yes, many other problems require computers far faster, and possibly even radically different processor concepts, to solve. But there is no finite problem that is not going to be fully “mapped” eventually.

        If you limit your thoughts to “WHAT WE CAN DO NOW,” then you are quite right.

        I am not limiting myself to “NOW” but to what is being developed and what is likely to become the dominant paradigms of computing, which indicates that as time goes by, more and more “massively large data sets” can be fully mapped and optimal solutions found. Creativity will matter less and less as the current limits on the ability to search all possible paths are overcome.

        Will “infinite computing space” ever become reality? Probably not; there is, after all, a finite limit to the amount of matter in the universe. But we might have a hard time telling the difference between a computer capable of mapping out 10^398 possible paths and one capable of 10^399. There will always be a finite limit to the data spaces a computer can explore, but those limits change with every advance and every new development. Claiming that anything is “impossible” just because “we can’t do it now” is a pretty weak argument.

    • Firstly, I should point out that my purpose in bringing up creative intelligence is not “defending human specialness” or “salvaging human egos,” but simply to highlight potential monkey wrenches in our path to Singularitarian salvation and life everlasting with the 89 dark elven virgins, as is the topic of the original article.

      “We used to think “Chess” required “creativity” but we’ve since discovered that a fast computer evaluating ALL POSSIBLE PATHS completely blows human creativity away”

      What you are suggesting is a brute-force solution to AGI: essentially a “Librarian of Babel,” as per Jorge Luis Borges’ short story. An uber-Google capable of searching through every possible “book” that could be composed by myriad monkeys banging on typewriters — that is, a near-unlimited possibility space. With its near-infinite computing power it can suss out, amongst the Vast sea of unparsable garble, the slightly less vast sea of grammatically correct nonsense, and the even less vast (but still impossibly vast) sea of dreck, tomorrow’s best-seller “How to Upload a Brain for Dummies (Humans)” or whatever Hard Problem we might like a super-AI to handwave away for us. While this is not impossible in theory, there seem to be as many, possibly more, limiting factors for such an AI (if we can even call it intelligence rather than a souped-up lookup table) than for an AI which operates using intelligence closer in mind-space to humans.

      The first limiting factor is testability. The example of Deep Blue vs. Kasparov, wherein a computer “beat” a human at chess, is flawed because the problem (win at chess) and the solution criteria (play through every scenario and see which leads to checkmate) are clearly defined, and it is easy for a computer to check whether a given trial is successful. For a problem we might want a Singularity to solve for us, such as “discover a cure for cancer,” the rate at which the computer could perform successful tests is limited, since any given trial cure can only be tested at the rate of clinical trials for cancer treatments, a bottleneck that makes it nigh-infinitely inferior to human medical science. (Not to mention unethical; trying randomly generated compounds on patients?) Or how about the problem “figure out how to upload my brain into a digital substrate”? Since this is a simple brute-force-search AGI, not a human-like one, humans would need to interact with every test-upload experiment of the AGI’s to confirm whether or not the uploaded entity is in fact a replica of the meat-human and not some different individual, or just a hashed pile of silicon-goo. This would again slow the traversal of the “data space” down to an utterly useless crawl, leaving our resources better invested in producing human-like, creative AI.

      Then you have the problem that, even with these potential gains in computing power (graphene, quantum, etc., and there is no guarantee quantum computing will pan out soon or at all), and even if we were to boost our machines by multiple orders of magnitude, the possibility spaces of non-trivial real-world problems, which span every field and dimension of reality (as opposed to a contained mathematical abstraction such as a chess or Go game), quickly dwarf any uncertain gains in magnitude we might make. For example, the possibility space of the Library of Babel — that is, all possible books of 410 pages, with 40 lines per page and 80 characters per line drawn from a 25-character alphabet, or about 1.3 million characters per book — is roughly 1.95 X 10^1834097 (see the quick back-of-the-envelope check at the end of this comment). Even if you managed to achieve exaflop speed on current hardware, that’s just 10^18 operations per second; bump that up by the trillion-fold sometimes supposed for quantum computing and your machine is still an insignificant drop in the bucket at 10^30. Forget a whole book; just try one page of text, and you’d still be waiting around for myriad heat deaths of the universe to even scratch the surface, and you’d need well over 10^200 operations to get anywhere near searching the space. The Doom I engine was around 2,000,000 characters, and to find it you’d have to search roughly another million orders of magnitude beyond the Library of Babel. Kurzweil estimates a human brain could be simulated in 25,000,000 lines of code, and he’s pretty universally accepted to be a rose-tinted idealist. Good luck searching for e-brains with your typewriter monkeys.

      “(Human creativity) is horribly inefficient, and results far too often in less than optimal solutions. In fact, it fails 99% of the time, partially succeeds .09% of the time, and only truly succeeds .01% of the time, if that.”

      Depends on what you mean by inefficient. Most ideas that pop into your head during lunch break don’t pan out. 95% of all startups fail, but it’s the 5%, the ones that find the treatment for cancer or invent the automobile or make the next breakthrough in AI or crack the problem of relativity, that make up for it. Thomas Edison successfully discovered 1000 ways not to make a lightbulb before he Let There Be Light with the flick of a switch. A computer brute-forcing the invention of a lightbulb would, without the guidance of human creativity, go through orders of magnitude more failures before coming up with a solution, since there are nearly infinite possible ways to assemble matter. Similarly, an AI with human-level creativity at limiting domains, applying high-level insight, and shortcutting, combined with the speed advantage, would be far more efficient than an AI lacking it. (How many brute-search AIs does it take to screw in a lightbulb? Possibly a googolplex.)

      So yeah, computing power can help you to an extent, but I think you’re not really going to get where we want to go – that is, an intelligence explosion – without some higher-level intelligent behavior that can direct the search with a level of computational efficiency at minimum on par with what we humans are capable of. If we want super-AI, we may need machines even *more* creative than we are.
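
      The back-of-the-envelope check referred to above, as a short Python sketch (it assumes the standard Borges parameters: 410 pages, 40 lines, 80 characters, a 25-symbol alphabet):

        import math

        # Borges' library: 410 pages x 40 lines x 80 characters, 25 possible symbols.
        chars_per_book = 410 * 40 * 80                   # 1,312,000 characters
        book_exponent = chars_per_book * math.log10(25)
        print(f"distinct books ~ 10^{book_exponent:,.0f}")   # ~10^1,834,097

        # An exaflop machine checking one candidate per operation manages ~10^18
        # candidates per second, or only about 3 x 10^34 per billion years --
        # nowhere near 10^1,834,097.
        per_billion_years = 1e18 * 1e9 * 365.25 * 24 * 3600
        print(f"candidates per billion years ~ 10^{math.log10(per_billion_years):.1f}")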

  21. AGI is a range. Not an absolute. Building AGI that models human consciousness is just one target to shoot for. There are other exciting directions that open up new frontiers for intelligent symbiotic dynamics.

  22. “This factor is about the complexity of the software that an AGI must develop in order to explode its intelligence. The premise behind this supposed bottleneck is that even an AGI with self-knowledge finds it hard to cope with the fabulous complexity of the problem of improving its own software.

    This seems implausible as a limiting factor, because the AGI could always leave the software alone and develop faster hardware. So long as the AGI can find a substrate that gives it a thousand-fold increase in clock speed, we have the possibility for a significant intelligence explosion.”

    The problem with the clock-speed argument is the same flaw inherent in the Kurzweilian speed argument: an increase in speed does not increase intelligent ability, and in particular not the imaginative, creative intelligence that will be necessary to create anything qualitatively better rather than merely “speedier.” In the same way, a roach brain sped up by a factor of a billion, performing more operations per second than a human brain, will nonetheless never solve non-linear equations, write poetry, or build you an H+ AGI.

    “It furthermore seems likely that, if an AGI system is able to comprehend its own software as well as a human being can, it will be able to improve that software significantly beyond what humans have been able to do. This is because in many ways, digital computer infrastructure is more suitable to software development than the human brain’s wetware. And AGI software may be able to interface directly with programming language interpreters, formal verification systems and other programming-related software, in ways that the human brain cannot. In that way the software complexity issues faced by human programmers would be significantly mitigated for human-level AGI systems.”

    There’s an assumption here that a human-level AGI will necessarily take the form of a conventional computer program, and can simply jump in and tweak its own LISP. As we learn more about our own intelligence, it seems more and more likely that our unique human creative intelligences are inseparable from, in fact built out of, our experiences and embodiment within the real world, and that any AGI will be closer to a biological system than to a debuggable “program” written in the traditional text-based programming languages human software engineers are used to. And seeing as the biggest development hurdle for AGI is our own understanding of our brains, the fastest way to create AGI will likely be to simply simulate our own minds. An AGI may not have any better interface to its own silicon/graphene/etc. “code” than we have to our own neurons. Can you talk to your own neurons? I sure can’t. Just as there’s no drop-down panel in the back of our heads that allows us to do code mods and soup up our meat-computers, so it may be for an AGI.

    If this is the case, it may also not be possible to simply “up the clock speed” of an AGI, as the accumulation of the quality, usable experience necessary for the synthesis of creative intelligence would seem to be only finitely acceleratable. Consider Einstein’s conceptual breakthrough of relativity, which required years, perhaps a lifetime, of experience across every field of human experience, famously including riding on trains and watching objects pass by. While you could speed up his brain a thousandfold, there would still be a bottleneck in the rate at which new quality experience can be experienced. And it’s these revolutionizing, paradigm-shifting conceptual breakthroughs that we know we need to really crack AGI, and that any AGI will need to crack AGI+, AGI++, etc.

    • It’s true that the speed of an AGI’s experiences in human society would have a certain upper limit — but if one thinks about societies of AGIs (operating and interacting at super-human speed), or about AGIs interfacing with the physical universe and learning directly therefrom, then the upper limits of an AGI’s experiential learning, while they may exist, are surely much higher than the upper limits for learning from human society….

    • It seems to me that even if AGI is initially achieved in a closely brain-like way, then: a) by studying this AGI, we will be able to understand more and more about intelligence, thus learning how to make AGI systems that are more and more scalable, and b) one will be able to insert instrumentation into these systems as part of their design, enabling careful study of their dynamics in a way that doesn’t work for brains right now. I mean: there’s no a priori reason each neuron couldn’t be set up to wirelessly transmit its state to some central server somewhere for later analysis … the brain isn’t like this, but it would be straightforward for engineers to build.
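
      For instance, a minimal sketch of that instrumentation idea (purely illustrative, with a toy InstrumentedNeuron class and an in-memory log standing in for the central server):

        import collections

        # Shared in-memory log standing in for the "central server" telemetry channel.
        telemetry = collections.defaultdict(list)

        class InstrumentedNeuron:
            """A toy leaky-integrator unit that reports its state after every update."""
            def __init__(self, neuron_id, decay=0.9):
                self.neuron_id = neuron_id
                self.decay = decay
                self.activation = 0.0

            def step(self, input_signal, t):
                self.activation = self.decay * self.activation + input_signal
                telemetry[self.neuron_id].append((t, self.activation))  # stream state out
                return self.activation

        neurons = [InstrumentedNeuron(i) for i in range(3)]
        for t in range(5):
            for n in neurons:
                n.step(input_signal=0.1 * n.neuron_id, t=t)

        print(telemetry[2])  # full state history of neuron 2, available for later analysis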

      • Observation and data collection would indeed be easier, and certainly less intrusive than with human neurons, yes, assuming the ultimate substrate of AGI turns out to be something more like silicon and less like our wetware.

  23. IMO the biggest limiting factor is corruption and incompetence in human society. If all humans had proper nutrition and medical care, and if they were directed toward the most effective problem-solving techniques for society, there would be exponentially more human brains working on the problem. There would be more tech centers and healthy tech ecosystems.

    Beyond that, I would imagine that there are several types of AGI and that they all have separate bottlenecks. I would imagine that transferring human knowledge to narrow AI will be an important stage of development for all types of AGI, for creating virtuous cycles. Don’t underestimate networked human intelligence as an important factor in moving this field forward. As humans become smarter and more productive with narrow AI (in all processes) there will be a nice multiplier for increasing productivity in the various emerging branches of AGI.

  24. A nice rundown. I think the picture is incomplete without

    Objection 8: The Bizarreness of the World.

    I argue at http://hplusmagazine.com/2010/12/15/problem-solved-unfriendly-ai that 1. Intelligence is for prediction and 2. The inherent unpredictability of a chaotic world, irreducibility in key problem domains, emergent effects, and ambiguous/missing/incorrect input data will limit the effectiveness of any intelligence. Adding AGI to the world will *increase* its unpredictability. Self-improvement may be possible but there will be absolute limits to how effective it will be; we can’t tell today what the limits of intelligence are but most likely they’ll stay way short of the godlike infallible level.

    • Monica, we intentionally made no hypotheses about godlike infallible intelligence — we characterized an intelligence explosion as something leading to AGIs with intelligence at least a few orders of magnitude beyond the human level, and didn’t go further. Whether there are ultimate upper limits beyond that, seems a hard question to resolve given our current level of intelligence and knowledge.

      Do you think the upper limits are close to human level, rather than say 10H or 1000H or 10^5 H? If so, why?

      • In other words — sure, maybe the unpredictability of the universe caps the maximum level of intelligence possible in this universe. But even if so, why would you suspect that we humans happen to be anywhere near this cap?

        • My belief is that the upper limit based on World Bizarreness is closer to 1H than 100H. I base this on a gut feel (nothing more) about the prediction horizons humans are capable of. We can predict what other agents (humans, animals, cars with humans in them) on a city street will do, given a certain precision of measurement, for the next 3 seconds much better than we can predict them 300 seconds ahead. A 100x improvement in prediction length might cost a millionfold more in processing (see the small numerical sketch at the end of this comment). I believe it is fair to compare intelligences based on nothing but prediction capability. I don’t currently have a clue about how to quantify or measure this better. Perhaps we need working AIs before we understand this.

          As to whether Objection 8 is worthy of inclusion in the list, I’d say it is about as important as the others. For instance, if intelligence is much simpler than we believe (and there are indications thereof), then it could be implemented in much smaller machines than we are considering in the list above, and both the hardware and bandwidth limitations would be much less important. We just don’t know.

          Still, it’s a valuable list.
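
          The small numerical sketch mentioned above (purely illustrative, using the chaotic logistic map as a stand-in for a hard-to-predict street scene): each extra factor of a million in initial-measurement precision buys only a roughly additive handful of extra prediction steps.

            def horizon(delta, tolerance=0.1, r=4.0, x0=0.2, max_steps=10_000):
                """Steps until two logistic-map trajectories, initially delta apart, diverge past tolerance."""
                x, y = x0, x0 + delta
                for step in range(max_steps):
                    if abs(x - y) > tolerance:
                        return step
                    x, y = r * x * (1 - x), r * y * (1 - y)
                return max_steps

            # Each millionfold improvement in precision adds only ~20 steps of horizon,
            # so a 100x longer horizon really can demand astronomically better inputs.
            for delta in (1e-3, 1e-9, 1e-15):
                print(delta, horizon(delta))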

          • If a 1H intelligence can reliably make predictions 3 seconds in advance (on the time scale of a 1H intelligence) about what other agents on a city street will do, then a 100H intelligence (a 1H intelligence running at 100x normal clock speed) would be able to make the same predictions 300 seconds in advance (on the time scale of a 100H intelligence).

            For example, we could use Special Relativity right now to precisely model these two cases simply by assuming that any two comparable 1H intelligences have a relative velocity close to the speed of light. In the rest frame of either of these intelligences, time would seem to pass as normal, but observations of the other’s intelligence would make it seem that time was passing 100x slower for them.

            Smarter versus Faster…

  25. I suppose we don’t need AGI to build an AGI.

    We need our intelligence, or software that can build software (and maybe hardware and physics).

    I really do not think the system needs to be an AGI, or a consciousness.

    What is consciousness?

    An overestimated quality: something a self uses to overestimate its ability to understand.

    If a system understands, and does what it has been told to do (like humans, or bees, which shall live and reproduce), it will not speak, will not interact with others, and will not use an overestimated “individual conscious self” to describe itself.

    Human consciousness is globally overestimated; collective consciousness does not exist (it obeys), and is mostly wrong.

    The new test for artificial intelligence should be used on human beings, in politics, etc., i.e. everywhere.

    In fact I don’t think we need an artificial consciousness, but I cannot stop the fact that some crazy or religious scientist will build one.

    Let’s see your “way of seeing the economy” in a world where the middle class is going to die like the poor class, even in the richest country/empire in the world.

    In fact you are crazy to think AI or AGI will give growth.

    This is stupid.

    The economy does not exist anymore: the economy is a flux of money for living (human) beings.

    I am sure that, like most Americans, or most foolish guys in the middle class, you think you have better chances than everybody else.

    lol

    In this case, if I evaluate that the way society is going could kill 99% of human beings,

    our chances are bad.

    And if I evaluate that AGI, or artificial consciousness, will likely not be good at first (military purposes), or will be uncontrollable in the long term,

    and if I evaluate the global intelligence of human beings, or the elite, or you,

    I think our chances are near ZERO.

    The transhumanist is spiritual, not materialist; the economy is finished.

    The world of sharing must replace the sharing of the world,

    or a lot of people will die.

    In fact they are already dying, on a global prison planet, a concentration camp,

    for

    “However, this is not a 100% critical point for our arguments, because even if software complexity remains a severe difficulty for a self-understanding, human-level AGI system, we can always fall back to arguments based on clock speed.”

    No, I don’t think you can do that. A spreadsheet that runs 1000 times faster is not any more intelligent. Waving your hand and assuming the first AGI into existence ignores the fact that we still can’t explain human creativity to a computer. Tip of the hat to Watson, but a fast database will get you only so far. “Remember Toronto!” will be the human battle cry!