Singularity 101 with Vernor Vinge

The Singularity. Ray Kurzweil has popularized it, and by now some of our readers no doubt drop it frequently into casual conversation and await it like salvation. (The second “helping”?) But many more are still unfamiliar with the concept.

The contemporary notion of the Singularity got started with legendary SF writer Vernor Vinge, whose 1981 novella True Names pictured a society on the verge of this “event.” In a 1993 essay, “The Coming Technological Singularity,” Vinge made his vision clear, writing that “within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”

We caught up with Vinge at the 2008 Singularity Summit in San Jose, California, where he opened the proceedings in conversation with Bob Pisani of CNBC.

Vinge’s most recent novel is Rainbows End.

h+: Let’s start with the basics. What is the Singularity?

VERNOR VINGE: Lots of people have definitions for the Singularity that may differ in various ways. My personal definition for the Singularity — I think that in the relatively near historical future, humans, using technology, will be able to create, or become, creatures of superhuman intelligence. I think the term Singularity is appropriate, because unlike other technological changes, it seems to me pretty evident that this change would be unintelligible to us afterwards in the same way that our present civilization is unintelligible to a goldfish.

h+: Haven’t there been other Singularities throughout history?

VV: Some folks will say there have been singularities before — for instance, the printing press. But before Gutenberg, you could have explained to somebody what a printing press would be, and you could have explained the consequences. Even though those consequences might not have been believed, the listener would have understood what you were saying. But you could not explain a printing press to a goldfish or a flatworm, and having the post-Singularity explained to us now would be qualitatively different in the same way. So all these extreme events, like the invention of fire, the invention of the printing press, and the evolution of cities and agriculture, are not the right analogy. The technological Singularity is more akin to the rise of humankind within the animal kingdom, or perhaps to the rise of multicellular life.

h+: Is the Singularity near?

VV: I’d personally be surprised if it hadn’t happened by 2030. That doesn’t mean that terrible things won’t happen instead, but I think it is the most likely non-catastrophic event in the near future.

h+: Should we be alarmed by the Singularity?

VV: You are contemplating something that can surpass the most competitively effective feature humans have — intelligence. So it’s entirely natural that there would be some real uneasiness about this. As I said, the nearest analogy in the history of the Earth is probably the rise of humans within the animal kingdom. There are some things about that which might not be good for humans. On the other hand, I think this points toward something larger. Thinking about the possibility of creating or becoming something of superhuman intelligence is an example of an optimism so far-reaching that it forces one to look carefully at what one has wanted. In other words, humans have been striving to make their lives better for a very long time, and it is very unsettling to realize that we may be entering an era where questions like “What is the meaning of life?” will be practical engineering questions. And that should be unsettling. On the other hand, I think it could be kind of healthy if we look at the things we really want, and at what it would mean if we could get them. And then we could move forward from there.

h+: What signs would you look for that would indicate the Singularity is near?

VV: There are a number of negative and positive symptoms that a person can watch for. An example of a negative symptom would be if you began to notice larger and larger software debacles. In fact, that’s sort of fun to write about. One of the simplest of positive signs is simply to note whether or not the effects of Moore’s Law are continuing on track.

The fundamental change that may be taking place is this: humans may be best characterized not as the tool-creating animal but as the only animal that has figured out how to outsource its cognition — how to spread its cognitive abilities into the outside world. We’ve been doing that for a little while now: about ten thousand years. Reading and writing are an outsourcing of memory. So we have a process going on here, and you can watch to see whether it’s ongoing. For instance, in the next ten years, if you notice more and more substitution of fragments of human cognition by the outside world — if human occupational responsibility becomes more and more automated in areas involving judgment that haven’t yet been automated — then what you’re seeing is rather like a rising tide of this cognitive outsourcing. That would be a very powerful symptom.
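Vinge’s simplest “positive sign” — whether Moore’s Law is staying on track — can be checked with back-of-the-envelope arithmetic. A minimal sketch (the two-year doubling period and the Intel 4004 baseline are illustrative assumptions, not figures from the interview):

```python
def projected_transistors(base_count, base_year, target_year, doubling_years=2):
    """Project a transistor count assuming a fixed doubling period."""
    return base_count * 2 ** ((target_year - base_year) / doubling_years)

# Starting from the Intel 4004 (~2,300 transistors, 1971), project to 2008,
# the year of the Summit where this interview took place.
print(round(projected_transistors(2300, 1971, 2008)))
# Comes out in the hundreds of millions, the same order of magnitude
# as actual 2008-era desktop CPUs.
```

Comparing such a projection against shipping chips, year after year, is exactly the kind of symptom-watching Vinge describes.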


  1. To those using the Borg analogy, you are off base. And to those who think we will see it coming, likening the Technological Singularity to prior so-called singularities: I think you are missing the point too. The Borg were a fairly primitive cyborg construct that does not scale to what we could be looking at when you have a confluence of AI, nanotech, and biotech.

    Vinge’s point about the printing press is accurate, IMHO, because the printing press was not the culmination of an exponential process, as technological change is today. I believe that, given the exponential nature of technological change, if an HE AI were developed, we would be looking at a matter of days before the world we (hopefully still) live in became unrecognizable in its post-Singularity state.

    • Agreed. The post-Singularity AIs, with their orders-of-magnitude-greater-than-human intelligence, will be able to make a massive number of such “one-off” inventions within a very short amount of time.

  2. It’s an interesting concept. Personally, I’m not as optimistic as Kurzweil. The human brain is far too complex to map every single thought process, synaptic transmission, etc. Computers are fundamentally serial by design, having to complete one task before moving on to the next; the brain, on the other hand, is completing thousands, if not millions, of processes at any given moment. Quantum mechanics will be the only way to tackle the Singularity progression. However, I still have my doubts as to even that being able to compete, but we shall see.

  3. Beyond the event horizon of a spacetime singularity, the laws that govern the known universe are no longer applicable; the nature and attributes of space, time, and energy are fundamentally changed. The singularity itself cannot be directly observed; only the effect it has on matter and space outside of the event horizon (gravitational lensing, polar X-ray emission, etc.) can be used to determine its location and existence. Mass and/or information continues to be absorbed, but no mass or information is forthcoming. Perhaps similar restrictions occur in all true states of singularity; this might explain why there has never been a documented case of contact with an advanced race or consciousness. When a race evolves to the point where a social/intellectual/technological singularity occurs (a critical mass of intellect), the flow of information and resources is halted and/or reversed.

  4. LOL, sorry, I keep seeing stuff on this topic. Personally I see the opposite: not a singularity but a branching deviation that grows and, yes, evolves. The evolution doesn’t happen overnight as some imply, but happens on a longer time frame just outside most people’s perception, and then seems more natural to humans. After all, not all humans want gene therapy or modifications, and they will not all want to link to an interconnected hive-like sentience. Human deviations will go on as long as there is a human race. Now the better question is: when do we become something so much different from humans that the evolved state seems alien or deity/godlike? As to the poster on 1000-core processors: that doesn’t touch quantum photonic chips. Through great diversity, wealth and knowledge will evolve faster, and I mean wealth as not just tangible trinkets. Oh well, just two for the pot, so to say.

  5. So is this similar to the Borg creatures on Star Trek? That’s what it sounds like to me. It’s an interesting concept, although I really don’t see the evidence necessary for such a thing to occur. Perhaps I don’t know enough on this subject, but it still seems unlikely to happen in my lifetime. What are the pros and cons of such an event occurring? I have many questions I feel are unanswered.

  6. This can never happen. It has been predicted before; it was the promise of artificial intelligence. The current processors are too primitive. The leap in human intelligence can only happen perhaps in one or two million years, but by then we will have destroyed the life-sustaining ability of the Earth. What most likely will happen is the extermination of militarily weak populations via robotic warfare to capture their resources, as in Iraq. Sooner or later we will have a big confrontation with China or Russia, and this event will be used to exterminate 99% of the population. The controllers in our society have already figured out that it is the population explosion that must be curtailed. The population of the Earth must be reduced to those who deserve to live because of their economic and military power; the rest of humanity must therefore be eliminated before they consume all the resources. Sadly, as we are witnessing, the more knowledge we accumulate, the more rapidly we destroy our own nest. A leap in intelligence will only translate into a leap in destruction.

  7. I think that unless there is a skynet type of startup, it will be like every other creation a slow start, popularity by early adopters, and final mass acceptance once it becomes inexpensive enough for most people, but not everyone will want it.

  8. The missing piece in AI is pattern recognition. A human can hear a few notes of a poorly whistled tune and recognize it, fill in the rest of the song, remember the composer, remember hearing it played in concert, etc.

    It doesn’t seem to be processing power alone that is missing, but something more fundamental about pattern matching algorithms.

    So I’d look toward pattern matching as an indicator of progress toward AI and super-AI. And if we get a breakthrough in understanding how this works in humans, then the Singularity predictions could follow very quickly after.
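    The commenter’s tune-recognition test can be sketched with a very crude stand-in for real pattern matching: fuzzy matching of note sequences by edit distance. (The song names and note strings below are invented for illustration; actual melody recognition is far harder, which is the commenter’s point.)

    ```python
    def edit_distance(a, b):
        """Classic dynamic-programming Levenshtein distance."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitute/match
            prev = cur
        return prev[-1]

    # Toy "song database" of note sequences; names and notes are made up.
    songs = {"Ode to Joy": "EEFGGFED", "Twinkle": "CCGGAAG"}
    whistled = "EEFGFFED"  # a poorly whistled fragment: one wrong note
    best = min(songs, key=lambda name: edit_distance(whistled, songs[name]))
    print(best)  # the nearest match despite the error
    ```

    The gap the commenter identifies is everything this sketch lacks: tolerance for tempo, key, and rhythm, and the associative recall that fills in the rest of the song.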

  9. I’m pretty sure I disagree that the Singularity “can never happen.” Scientists in nanotechnology, biology, and computer science, just to name a few fields, seem pretty confident in the progression of technical tools. Nano-scale processing is currently being employed to create computer chips. What we see is a future that expands greatly on our already nearly transhuman present.

    Of course, that’s just with hardware. Software must keep up–as Vinge has pointed out in various other forums and articles. Kurzweil as well.

    Now, that said, I believe that we do face a real problem of social inequality–one that a technological Shift or Singularity could exacerbate in terms of the disparity we already face as a species. We could face this as an engineering problem as we develop greater tools and move beyond our current (and obviously flawed) economic concepts–or we could take some road not unlike that described by Dural.

    How we treat each other–and as Vinge implies here, whether we decide to ask “What kind of world do we want?”–will be part of what determines the outcome.

  10. It would be comparable to the “Borg” of Star Trek fame, except that the Borg sustain themselves by essentially “raping” their victims of whatever is wanted, be it information or the creation of a drone. We as a species have a tendency to think of a “collective” as nothing more than a technologically advanced “hive mind” analogous to a bee hive or an ant colony.
    However, using the Borg/rape example, it is equally possible to think of a theoretical transhumanism being a consensual thing. Humanity’s need for drama in our entertainment indeed makes us first consider what can go wrong, following a “good and evil” paradigm that our species should have long outgrown. These are merely perspectives.
    It is my perspective that the singularity in question is inevitable, but as it becomes more “real” in the future, we will all react to it differently. Some will see it as threatening, like an alien invader. Others will embrace it and welcome it as just another step in human evolution. Still others may choose to hoard access to it for themselves and not share the technology with the average human, giving the majority no opportunity to embrace or reject it. The world is far too complex at this point to know with any accuracy what form this singularity shall take.
    In the “Technopaganism” talks held by Terence McKenna and Mark Pesce, they outlined in the mid ’90s how we can positively interact with what may be coming down the line. They too saw it as “only a matter of time” within the span of our own lives. Embracing the singularity with wonder and love for where humanity might be going seems the best course of action. We’ve made so many simulations and simulacra of what the future may hold, in Hollywood movies and video games and sci-fi novels, and they may all be wrong. I doubt we can even imagine at this point what life on the other side of the singularity will be like. I take the attitude that we’re on a roller-coaster toward something amazing, and we had better hold on for the ride.

  11. As I have said elsewhere on H+, I do not agree with the intelligence-explosion concept of the Singularity, as I am of the “humanity has experienced many Singularities before” camp.

    You could have explained to a pre-printing-press person what a printing press was, yes, but they would have had no concept whatsoever of what the printing press actually meant. They would never have been able to foresee that the ability to mass-produce books would lead to the literacy of the masses, the destruction of the Catholic Church’s hold on knowledge and the extremely bloody Reformation, the rise of the merchant middle class and the destruction of the feudal system, the eventual rise of democratic governments, the creation of a dedicated research social class, or the technological innovations we have today which enable us to contemplate “The Singularity” as defined by a popular BOOK WRITER, all of which were directly or indirectly enabled by mass-produced books. Centuries later, we are STILL feeling the effects of that “Singularity.”

    We have already entered the next Singularity. Our current economic crises are all due to the beginnings of the turmoil of shifting from the capitalist mindset of alpha-dominance control of scarce resources to the economy of abundance that a post-scarcity society will inevitably have. We can’t see beyond a certain point right now, but that point will recede as we continue into the future. Superhuman intelligence will indeed mark a defining point in this Singularity, but there is far more to it than just that single point.

    Nor do I believe in the runaway recursively self-improving AI scenario. While I do believe we will begin increasing intelligence at an extremely rapid pace, there are any number of possible limitations which would prevent that pace from reaching maximum exponential acceleration, not the least of which is the actual time needed to physically move the necessary atoms to restructure the computing substrate in which the AI runs, be it a “true AI” or an “uploaded human.”

    Simply put, I think we will be through the Singularity before most people realize it, wondering how we missed it because we kept too focused on that ever-elusive “instant” when it was supposed to occur.

  12. Everyone seems to wonder to what extent the Singularity will change our world in our own lifetimes (for some of us younger humans), but is anyone thinking about what our world could be like in 100 years? What about 200 years? Sure, most people just don’t care because they figure they won’t be around to see it, but hey, let’s assume that machine intelligence surpasses human intelligence around 2030, as Kurzweil predicts. Can you imagine the world 50 years later? A self-improving AI could, after, let’s say, 30 years, become almost godlike. This is the ultimate paradigm. We are starting down the road to building the highest creation that an organism in this universe can create: a non-suffering, perfect intelligence.

  13. How’s this for a “software debacle”: Concurrency. In spite of processors coming with more and more cores, programmers don’t have a really good way to write programs so that many concurrent threads can access the same data without either trashing the data or making the threads spend most of their time waiting for their turn to access it. We’re not ready for thousand-core processors.
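    The shared-data problem the commenter describes fits in a few lines. A minimal sketch in Python (thread and iteration counts are arbitrary): several threads increment one counter, and a lock keeps the updates from interleaving, at the cost of serializing exactly the access the commenter worries about.

    ```python
    import threading

    counter = 0
    lock = threading.Lock()

    def worker(n):
        """Increment the shared counter n times, holding the lock each time."""
        global counter
        for _ in range(n):
            with lock:  # serialize access; without it, increments can be lost
                counter += 1

    threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # 80000: correct, but every update waited its turn
    ```

    Scale this pattern to a thousand cores and the lock becomes the bottleneck, which is the commenter’s point: correctness today is bought with serialization.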