Why The Singularity Is Boring

This article had better begin with a correction: the Singularity is not boring. On the contrary, the concept of engineering greater-than-human intelligence is one that raises fascinating scientific and philosophical questions. What we think we know about the future, based on our past experiences, could turn out to be radically misguided if the future includes a technological singularity. Greater-than-human intelligence could change everything.

We should exercise caution when speaking of the Singularity. Despite what some people claim, nobody knows when a Singularity is going to happen and nobody knows what consequences will follow. Amidst all these claims, I don’t see people exercising any such caution, or owning up to that ignorance, when talking about the Singularity. Instead, I see people using it as a convenient technological fix. And I have a problem with that.

I fully subscribe to the transhumanist dream and the goals the transhumanist movement espouses. Things like engineered negligible senescence, the eradication of poverty through molecular manufacturing and the end of boring work thanks to robots laboring tirelessly on our behalf; these strike me as reasonable expectations. I know of no physical laws forbidding these dreams from being fulfilled. That they have not been fulfilled yet is due entirely to the current state of our own ignorance.

But, just because we can reasonably expect such things to be possible does not mean we should consider them easy, inevitable or both. We should acknowledge the immense technical challenges we face in trying to engineer any one of these scenarios, let alone a glorious future in which all our problems are solved and utopia is achieved. Try pointing this out over at the KurzweilAI.net forums though, and someone will almost certainly reply that no problem will delay the fulfillment of transhuman dreams very long. Why? Because the Singularity will solve them for us.

Israeli-British physicist David Deutsch wrote an essay in April for New Scientist called “Why Science is the Source of All Progress.” In it, he explains what makes a good scientific explanation. He asks us to consider two explanations for why we have seasons. One is that the harvest goddess, Demeter, becomes sad because her daughter Persephone must temporarily depart to Hades; her sadness causes winter. That was the ancient Greek explanation. Another is that the Earth’s axis is tilted relative to the plane of its orbit. This means that, during half the year, the northern hemisphere is tilted toward the sun and the southern hemisphere is tilted away, while the reverse is true for the second half of the year. That is our explanation for why we have winter.

Deutsch points out that both explanations are testable since they could, in principle, be falsified through experiment and observation. In that sense, both are scientific. But, as Deutsch wrote, “their (the Greeks’) underlying explanations are easily varied, myths can accommodate any new experience.” In other words, the mythical explanation can be infinitely adjusted so that it stands up against any falsifying experiment or observation. In the end, it’s “not even wrong.”

The ancient Greeks also gave us the phrase “deus ex machina,” which the Oxford Dictionary defines as “an unexpected power or event saving a seemingly hopeless situation, especially as a contrived plot device in a play or novel.” A deus ex machina makes audiences and readers roll their eyes when they encounter it in a play or a story, and we should likewise roll our eyes when we encounter a deus ex machina being used to resolve all questions regarding the feasibility of achieving transhuman goals within our lifetime. “The Singularity will fix it” is a deus ex machina.

It also turns transhumanism into an infinitely variable explanation. Just like the myth of Demeter, you can continue to believe in the swift and inevitable success of transhuman dreams if you can invoke a godlike power that can fix anything. Hell, you can even posit a total rewrite of the laws of physics, thanks to the Singularity hacking the program that runs the universe. So even if some of our dreams turn out to violate physical laws, there is no reason to abandon faith.

To me, there is something deeply troubling about using the Singularity as a kind of protective barrier against all skepticism regarding the likelihood of achieving transhuman goals within a generation. It is difficult to reason with people who use the Singularity concept in this way, and even harder to have a logical debate with them. They have a deus ex machina to hand that can demolish any argument designed to show that transhuman dreams will not inevitably come true within our lifetime. This kind of reaction takes reasonable, scientific expectations of a brighter future and pushes them dangerously close to being an irrational pseudo-religion. And I find pseudo-religions boring.

14 Comments

  1. Do you think we’ll get there by 2025, considering that people have been trying just that since the first clay homunculus? 🙂

  2. Guys, I hate to burst your bubble, but there’s something a little off about your conception of Artificial Thinking Machines and Artificial Intelligence.

    Mr. Turing was not wrong per se, but his reasoning was extremely limited. A truly intelligent being is not something that merely fools another into thinking that it is intelligent, because “to pretend” is only an algorithmic subset of “to be human/to be alive,” and current robots are designed only to pretend and react.

    I propose the following instead of the Turing Test. It would most definitely prove whether an A.I., or any other form of artificial life for that matter, physical or virtual, is worthy of being judged intelligent and alive. I call it The Mirror Test, and it’s a fourfold process that takes into account the fact that being intelligent and alive involves more than language/imitating language: it also involves the ability to be an active causal agent, to discern and decide between causal events, and to react to their effects.

    An artificial entity is to be deemed alive and intelligent if:

    1. It can spontaneously ask itself what it is and also be able to answer that question unaided;
    2. It is capable of reverse engineering itself and of building/programming another entity that the creator machine itself cannot tell for sure is alive or not, and that can in turn reverse engineer itself and build another intelligent entity sufficiently alive that the second-order creator machine cannot tell for sure whether it is similar to itself. This could go on indefinitely, although only one spontaneous reverse engineering and structuring of an other is needed to actually prove that the entity is alive and intelligent.
    3. It can either play an “open environment” computer game or exist in an open-ended physical context that involves both action and communication (real life), either cooperatively or competitively with the other entities (human or not), without the other entities realizing that the entity is artificial. In layman’s terms, an entity is alive if it can act willingly, spontaneously and in real time instead of just responding to action.
    4. It is capable of spontaneous and coherent cognitive processes with novel results, conscious or not (i.e. imagination, dreams) as well as being able to describe them and wanting to do so without any external stimuli.

  3. Kurzweil is an optimist, but if you pay attention to his book about the Singularity there are significant discussions about the risks faced along the way. Some will be overcome by technology, some must be faced soberly with tough choices about the kind of people we are. So, Kurzweil himself is not the kind of “Kurzweilian optimist” you paint a caricature of here.

  4. And this is why I am getting cryonically frozen. I sure as hell won’t be around to deal with any sort of singularity and the problems involved in getting even remotely close to it. Good read.

  5. >I have met a lot of people focused on the Singularity who aren’t naive at all<

    Yes, the annual Singularity Summits feature excellent speakers offering clear-headed analyses of the R&D currently underway which, in some way or other, could eventually contribute to a Singularity. I definitely do not want to give the impression that everybody thinking about the Singularity talks like it’s a deus ex machina that will sweep away problems just like that.

  6. I have a problem with being described as a “Singularitarian,” primarily because I’m far more worried about the effects of cumulative exponential change than I am about “Greater-than-Human Intelligence.”

    There are a lot of changes about to occur due to the technology “Normal Human Intelligence” is making RIGHT NOW that will be extremely disruptive, and potentially catastrophically so. Yet these changes are ignored, overlooked, and often dismissed because too many people are simply unwilling to contemplate them. I’ve been accused repeatedly of “Wild Optimism” or “Excessive flights of fantasy” simply for pointing out that the logical end result of these “small changes,” as they accumulate, will be a world so radically different from what we have previously accepted as “normal” that even those of us who ARE actually looking forward have trouble accepting the likely result.

    GTHI is the least of my worries. Coping with Merely Human Intelligence is going to have to come first, and that is likely going to be so difficult that the “Singularity” as defined by the GTHI crowd will seem like a cakewalk in comparison.

    We’ve got to survive US to get to THEM.

  7. To quote Ben Bova in Analog: “the barrier to utopia remains what it always has been – Human Cussedness.” The Singularity may solve our material problems, but it is the absolute height of naivete to think that it will solve the problems caused by human nature.

  8. @Ron T Blechner: Reading your comment, I thought of this old wise man in the hills of SF who said to an Indian girl coming for lunch: “how can you justify love if you can’t even justify god?”

  9. Even optimistic views on the singularity may have undesirable consequences, at least as we simple humans might perceive them. It’s quite possible that a super-intelligence will know how to fix our problems rather efficiently, and by necessity will have to act on that even if we don’t feel ready, or don’t understand. How many of us will let ourselves be manipulated by a higher intellect for our own good, and what if it hurts to do so?

  10. Bravo, Extropia!

    The single scariest thing about Singularitarians is that many of them treat the singularity like a religion, where the Singularity Messiah will come and lift all out of oppression and grant eternal life. Deus Ex Machina: the Ancient Greeks would lower an actor playing Zeus onto the stage from above, and he would resolve every earthly crisis in the story.

    It’s irrational, and simply swapping one end-times religion for another. It ignores the fact that there are huge ethical and logistical issues that need to be resolved to pave the way for a singularity – if it should ever happen – and which will need to be resolved after it.

    It makes me happy to see your name attached to a well-reasoned, balanced, and yet optimistic view of the singularity.

  11. Nice article; I wonder why Ray Kurzweil doesn’t include in his predictions the moment when we’ll have the first generation of OLPC post-doc physicists (around 2025, by my reckoning). If that famous theory holds (“prostitution should be allowed as far as pimping your brain is not condemnable either”), then we’ll have some kind of first Singularity in 2025, when instead of one million physicists we’ll have one billion (all of them mostly thinking in ideas, the computers doing all the mathematical part). But before we can have a computer “conceptually” six billion times smarter than a human brain, well, even twice as smart would be a fucking Singularity. Anyway, from this point of view, and from many others I guess, the problem is not Deus ex Machina or not; the problem is what exactly you call a religion. (wouarf wouarf)

  12. The control system inherited from our past works from self-interest and has a disposition towards goals that produce gain in a relatively short timeframe (a long-term goal is either forced onto the individual or emerges as a property of a series of short-term goals beyond the consciousness of the individual). A mind that works towards a goal with profound consequences for civilization does not do it for the unknown mass; the motives are intimate even if the ultimate consequence is not. Most of the activity connected to the Singularity is motivated by the same problems that drive religion, and it is carried by the same dreams and hopes. What you ask of a human is not possible: a long-term goal separated from any value for the person who carries it.


    But I, being poor, have only my dreams; I have spread my dreams under your feet; Tread softly because you tread on my dreams


  14. I wonder how many singularitarians of this naive and optimistic Kurzweilian kind you describe there actually are.

    Probably you’re correct that there really are quite a lot, though it’s strange from my point of view since I so seldom seem to meet any. Guess I should go take a look at the KurzweilAI.net forums sometime, I haven’t actually ever done that.

    Meanwhile, I have met a lot of people focused on the Singularity who aren’t naive at all (or very optimistic either, for that matter). See e.g. this recent essay by Michael Anissimov, the Media Director for SIAI (Singularity Institute for Artificial Intelligence):

    Security is Paramount

    A couple of excerpts:

    “Some transhumanists confront the challenge of massive power asymmetry like children. They see nanotechnology, life extension, and AI as a form of candy, and reach for them longingly. Like children, they have a temper tantrum at any suggestion that the candy could have negative effects as well as positive ones.

    Transhumanists have to grow up. The world is not your candy basket. The technologies we are pushing towards could lead to our demise just as easily as our salvation. You and everything you love could be eliminated by the technologies you were so excited about in the 2010s and 2020s.”

    “The future is not exciting and optimistic. The future is dark and uncertain, imbued with the heavy sense of responsibility we personally have to make things go well.”

    Hmm, your article might actually have benefited from mentioning that there are many people involved with the Singularity who do have this more mature view of the subject matter.
