Hear that? It’s the Singularity coming.

The idea of a pending technological Singularity is under attack again with a number of prominent futurists arguing against the possibility — most notably Charlie Stross and his astonishingly unconvincing article, “Three arguments against the singularity.” While it’s not my intention to write a comprehensive rebuttal at this time, I would like to bring something to everyone’s attention: the early rumblings of the coming Singularity are becoming increasingly evident.

Make no mistake. It’s coming.

As I’ve discussed on this blog before, there are nearly as many definitions of the Singularity as there are individuals who are willing to talk about it. The whole concept is very much a sounding board for our various hopes and fears about radical technologies and where they may bring our species and our civilization. It’s important to note, however, that at best the Singularity describes a social event horizon beyond which it becomes difficult, if not impossible, to predict the impact of the advent of recursively self-improving greater-than-human artificial intelligence.

So, it’s more of a question than an answer. And in my own attempt to resolve this quandary, I have gravitated towards the I.J. Good camp, in which the Singularity is characterized as an intelligence explosion. In 1965, Good wrote,

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

This perspective and phrasing sits well with me, mostly because I already see signs of this pending intelligence explosion happening all around us. It’s becoming glaringly obvious that humanity is offloading all of it’s capacities, albeit in a distributed way, to its technological artifacts. Eventually, these artifacts will supersede our capacities in every way imaginable, including the acquisition of entirely new ones.

A common misconception about the Singularity and the idea of greater-than-human AI is that it will involve a conscious, self-reflective, and even morally accountable agent. This has led some people to believe that it will have deep and profound thoughts, quote Sartre, and consequently act in a quasi-human manner. This will not be the case. We are not talking about artificial consciousness or even human-like cognition. Rather, we are talking about super-expert systems that are capable of executing tasks that exceed human capacities. It will stem from a multiplicity of systems that are individually singular in purpose or, at the very least, very limited in functional scope. And in virtually all cases, these systems won’t reflect on the consequences of their actions unless they are programmed to do so.

But just because they’re highly specialized doesn’t mean they won’t be insanely powerful. These systems will have access to a myriad of resources around them, including the internet, factories, replicators, socially engineered humans, robots that they can control remotely, and much more; this technological outreach will serve as their arms and legs.

Consequently, the great fear of the Singularity stems from the realization that these machine intelligences, which will have processing capacities orders of magnitude beyond those of humans, will be able to achieve their pre-programmed goals without difficulty, even if we try to intervene and stop them. This is what has led to the fear of poorly programmed or “malevolent” SAI. If our instructions to these super-expert systems are poorly articulated or under-developed, these machines could pull the old ‘earth-into-paperclips’ routine.

For those skeptics who don’t see this coming, I implore them to look around. We are beginning to see the opening salvo of the intelligence explosion. We are already creating systems that exceed our capacities, and it’s a trend that is quickly accelerating. This is a process that started a few decades ago with the advent of computers and other calculating machines, but it’s only recently that we’ve witnessed more profound innovations. Humanity chuckled in collective nervousness back in 1997 when chess grandmaster Garry Kasparov was defeated by Deep Blue. From that moment on we knew the writing was on the wall, but we’ve since chosen to deny the implications. Call it proof of concept, if you will, that a Singularity is coming.

More recently, we have developed a machine that can defeat the finest Jeopardy players, and now there’s an AI/robotic system that can play billiards at a high level. You see where this is going, right? We are systematically creating individual systems that will eventually and collectively exceed all human capacities. This can only be described as an intelligence explosion. While we are a long way off from creating a unified system that can defeat us well-rounded, highly multidisciplinary humans across all fields, it’s not unrealistic to suggest that such a day is coming.

But that’s beside the point. What’s of concern here is the advent of the super-expert system that works beyond human comprehension and control—the one that takes things a bit too far and with catastrophic results.

Or with good results.

Or with something that we can’t even begin to imagine.

We don’t know, but we can be pretty darned sure it’ll be disruptive—if not paradigmatic in scope. This is why it’s called the Singularity. The skeptics and the critics can clench their fists and stamp their feet all they want, but that’s where we find ourselves.

We humans are already lagging behind many of our systems in terms of comprehension, especially in mathematics. Our artifacts will increasingly do things for reasons that we can’t really understand. We’ll just have to stand back and watch, incredulous as to the how and why. And accompanying this will come the (likely) involuntary relinquishment of control.

So, we can nit-pick all we want about definitions, fantasize about creating a god from the machine, or poke fun at the rapture of the nerds.

Or we can start to take this potential more seriously and have a mature and fully engaged discussion on the matter.

So what’s it going to be?

32 Responses

  1. Bernard Garner says:

    I rather like Kurzweil’s position that we will be so intimately connected to our machines that in a sense “they will be us”. The nature of human consciousness is the big question for the 21st century and it is drawing in a lot of the best minds as the tools to look into our heads are maturing in the same way that physics in the 20th century advanced when we could look into the atomic realm. I suspect the two fields of consciousness research and AI will start converging in the not too distant future.

  2. George, on what are you basing this assumption that AI will continue to be narrow? Make no mistake; there will be generalized artificial super intelligence, but there is also this misguided notion that AGI will be this intelligence set apart from human intelligence. However, we will use this same technology to augment our own human intelligence and merge with machines. So, in this way, we will evolve along with our creations.

  3. Tom Marsh says:

    George,
    Why not engage (do something) rather than debate (just talk)?

    SDKs for machine learning are available that enable any programmer to build machine learning into applications.

    Your closing is right on point; denial won’t work. I would add something more personal and suggest that you/we can all do more than engage in a debate: we can get involved in shaping the progression to the Singularity.
    Just as the PC brought computing to all of us, an AI engine that all developers can work with, one that runs on your own computer, provides the collective power to offset the big interests driving technology. Whether ours or someone else’s, it does not run in the cloud and is not subject to the intentions and motives of a big tech company or government that reassures us (“do no evil” sound familiar?) that all it does is for our own best interests. Our objective is to democratize the process and, in so doing, hope to shape the outcome. Better yet, you don’t have to wait for it; you can use it now. This isn’t about conspiracy theories; it’s more about the inability of our institutions to help us stay out of trouble on this subject. We think it’s also more fun.
    Tom

  4. filou says:

    I’m afraid our perception of reality will be enhanced ever faster than our ability to reproduce reality. But as far as it means anything, the Turing test misses something that is called collective intuition.

  5. mw says:

    “super-expert systems that are capable of executing tasks that exceed human capacities”

    I’ve heard of a so-called “Cuckoo Clock” that can tell time beyond natural human biological clock capabilities.

    I’m curious to see what new options, directions, capabilities, and problems open up along the way. It’s like Star Trek’s creators being so focused on FTL travel that they forget about a cure for male pattern baldness.

  6. Mammago says:

    Rather good article in my opinion – and yes, I have to say that these wilfully ignorant people, some of whom quite frankly should know better, have started to try my patience as well.

    You mention Charlie Stross’ article, which I had not heard about, but will promptly go and read – but one of the most irritating, I find, is Hank Campbell.

    I have just had a very long and semi-acrimonious exchange with him here:

    http://www.science20.com/cool-links/sorry_ray_kurzweil_ai_hasnt_improved_much_1960s-80838#comment-77586

    The post which I replied to was just above the one you will be sent to. I would actually quite like some people from this site with an opinion on the matter to go and have a look, and tell me if I was being unreasonable, as I am comparatively inexperienced (undergrad). I just felt that he was totally inept at summoning evidence that supported his claims.

  7. I don’t doubt that the capabilities of technology will continue to increase at an accelerating rate. However, I fail to see how technology will ‘do anything’ without consciousness in the loop.

    Sure, a computer might be able to beat any of us at any board game… it might be able to beat any nation on earth in a war… but why would it want to? Why would it bother, unless someone was telling it to?

    Somehow I think intelligent machines will behave like the zenest of monks. They will sit there and space out, indefinitely. Just like my Nintendo Entertainment System has since 1996.

    This of course excludes paperclip maximiser type scenarios where things get stuck in loops that we can’t get them out of, but I associate such scenarios less with a singularity event and more with catastrophic events like nuclear MAD or random comet impact.

  8. Yissar says:

    There was a long dialogue in the comments between readers and Charlie Stross.
    #189 Stross writes:
    “Ahem:

    1. I always like to take a contrarian viewpoint when examining ideas, to see if I can break them.

    2. I’ve got a novel coming out in just under two weeks time.

    Please derive the obvious conclusion from these facts.”

    so …

    Besides, not every criticism or different view is “an attack”.
    I believe this is one of the main problems of transhumanists: instead of engaging in a grown-up discussion, they treat every view different from their own as an attack and respond accordingly.
    Not very mature…

  9. BrocasBrian says:

    I think the arguments for a Singularity-type event or events are convincing. This article is a little too dogmatic, however, for this skeptic.

    “The idea of a pending technological Singularity is under attack again with a number of prominent futurists arguing against the possibility”

    This attitude smacks of religion and not reasoned thought to me. Science by press release like the Discovery Institute.

  10. “It’s important to note, however, that at best the Singularity describes a social event horizon beyond which it becomes difficult, if not impossible, to predict the impact of the advent of recursively self-improving greater-than-human artificial intelligence.”

    Haven’t we been there for a couple of decades? The track record of futurists is pathetic… I’m assuming you’re speaking strictly about experts who concern themselves with predicting the future, because it’s been an ongoing mystery to the general population the entire time.

  11. xgeronimo says:

    Any technology that is devoid of spirituality is ultimately self-destructive.

  12. filou says:

    According to a recent personal conversation with Aubrey, we will NOT merge, but they will be around us.

  13. mike says:

    And soon we will colonize other planets in the solar system! Oh wait, that was last century’s science fiction/fantasy.

  14. Peter Christiansen says:

    Excellent piece!

    Thank you, George.

  15. Beo says:

    Meh, those may be useful technologies, but I do not see any breakthrough like, let’s say, the transistor.

    >>AI/robotic system that can play billiards at a high level.

    Why don’t you tell us instead about fully automated factories? I’ve heard there are some. And industrial robots.

    >a machine that can defeat the finest Jeopardy players

    Yes, that’s quite impressive. It is strange we do not hear about Watson any more.

    >This can only be described as an intelligence explosion.

    What we need is a machine with the reasoning ability of a cat. I think it would be able to do 80% of the work which is done today by humans. There is some progress in data mining and in hardware, as proved by IBM Watson. We’re stuck on things which are done “subconsciously,” like interacting with the physical world, and on things which require general intelligence, like text translation.

    > We humans are already lagging behind many of our systems in terms of comprehension, especially in mathematics.

    No, we aren’t. There is no comprehension in any written program or any known algorithm. That’s why there is almost no progress in machine translation.

  16. It’s not likely that genetic technologies would let us add significant capabilities to existing humans on a timescale of minutes to hours. We can already do that to our artifacts.

    Two architectures matching their intelligence over the course of innovation is a highly specific hypothesis. It’s more likely that one substrate or the other has an advantage for creating intelligences. For a lot of reasons, silicon is better for that than meat.

  17. adamC says:

    “humanity is offloading all of it’s capacities” … this apparently includes the correct use of contractions. 😉

  18. Kuhar says:

    @Janizzary

    Genetic research could indeed lead to a singularity, but only if it weren’t so heavily regulated. It’s hard enough to get a new drug or vaccine approved, let alone a new gene therapy. On top of that, you’re talking about a form of gene therapy that’s not targeted at eradicating disease, but rather at increasing an individual’s intellect. Where’s the funding for that going to come from and what do you think the likelihood of the FDA approving it is going to be?

    My money is on an AI intelligence explosion over some kind of human genetic intelligence explosion if for no other reason than that there’s less red tape involved in AI research. Nothing can kill innovation faster than bureaucracy.

  19. I was working on The Singularity this morning, coding the AI Mind in the Forth programming language. I delayed the arrival of The Singularity by two days in order to write and publish the http://code.google.com/p/mindforth/wiki/KbRetro documentation of a recent AI Mind advance as implemented in MindForth and in the tutorial http://www.scn.org/~mentifex/AiMind.html software. When the Singularity arrives, will all of us AI coders be out of work?

  20. Jesse says:

    Janizzary: As romantic as that concept is, it’s completely unrealistic – genetic manipulation will never be able to keep pace with AI. Although we may, eventually, be able to enhance our intellect genetically to some degree, we will always be limited by our biology and even physical size, whereas AI systems don’t have such constraints. At best, we could splice technology into our biological brains, cyborg-like, but even that has limitations. We need to get over this: the author’s point will come to pass – soon enough, humans will be dwarfed intellectually by the machines they have created. The question is, how will we feel about that? By your comments, you’re clearly not too comfortable with the idea, which to me is just as scary as the idea itself.

    • Nykos says:

      The assumption is that AI research continues at the same pace. It will only do so provided that average human intelligence stays constant. Sadly, the fact is that stupid people outbreed the smart ones in a technological society where the genes of stupidity are no longer weeded out of existence by mother nature. See the movie Idiocracy.

      We should consider the worst-case scenario: that the Singularity will not happen because people are too stupid to invent self-improving AI. The solution is to make more Einstein-level geniuses. A lot more.

      • Nykos says:

        While it looks like AGI is more than 20 years away, the best thing we could do to accelerate it is to create the intelligence that we CAN and KNOW how to create: human intelligence. We simply collect sperm and eggs from the world’s top scientists and create thousands (ideally, even more) of embryos in the lab. We institute a one-biological-child policy like China’s and ask surrogate mothers to carry the little Einstein embryos.

        By the time they reach their 20th birthdays, we could put them together and ask them to build us an AI, continue Aubrey de Grey’s work on fixing aging, etc.

        • Madrigorne says:

          Read Cyteen by C. J. Cherryh

        • Mark says:

          Who carries and raises these children? Childless transhumanist couples? For the good of the movement? I’m sure they’d rather find a way to use their own DNA. And what happens when all these genetic twins of scientists, who presumably have an inclination to learn science, grow up, and one is a linguist, one is a football player, one an artist, one a farmer, one a pharmacist? Do we ask them to make us an AI? Or are these people to be denied the freedom of choice of career, whatever their personal aptitudes may be? Good luck with that! Far better to pick from the labor pool of the ever-evolving planet of humans.

  21. Janizzary says:

    I don’t understand why so many Singularitarians focus so much on the AI aspect of the Singularity and ignore the genetic aspect. As quickly as we advance AI performance, we also advance our understanding and implementation of genetic research. This research will inevitably add to our own intelligence once we figure out how to splice “intelligent DNA” into our own. This will thus match our intelligence with that of our artifacts.

    • tshirt2008 says:

      I think that the advancement of software and technology is increasing far faster than the advancement of genetic research, but I do agree that we should spend a lot more on discovering ways to enhance ourselves, if only to better understand and “control” an AI that has an intelligence superior to our own.

      • Nykos says:

        We could, in theory, ask the top scientists, engineers, and doctors of our society to donate sperm and eggs – in exchange for money. We could then create embryos and pay surrogate mothers to raise the geniuses. Repeat the procedure enough times and you could create a human population of slightly superhuman intelligence in a few decades (in case the AI and genetic engineering research doesn’t pan out).

        Sadly, it won’t happen. People will start screaming: “Eugenics! Nazis!”, as left-wing egalitarians and blank slatists already try to suppress the knowledge that any difference in intelligence between human individuals is largely genetic. Admitting that eugenics is possible and may produce good results means admitting that all humans are not equally intelligent and equally useful to society as a whole, something that people cherishing the ‘universal equality of man’ may find hard to swallow. So they will instead bury their heads in the sand even if all science points to this conclusion of human genetic inequality.

        • Nykos says:

          Those slightly superhumanly-intelligent people could then solve the AI problem for good, if the rest of us are too stupid.

          The bottom line is: until AI truly takes off, how fast the Singularity happens is directly proportional to the number (and magnitude) of smart people on this planet. The more smart people, the nearer the Singularity, and the fewer people have to die due to aging.

    • ambrosia says:

      This is the most interesting thing I’ve heard all month. I’ve never even thought of the possibility. I guess I’ll have to start thinking about the Singularity more… 🙂
