
Eleven Ways to Avoid an Extremely Bad Singularity

Ben Goertzel

I just returned home from the 2009 Singularity Summit in New York, which was an extremely successful one: nearly 800 attendees, a diverse speaker list, and an overwhelming amount of interesting discussion. The 2008 Summit had a lot of great stuff, including Intel CTO Justin Rattner on his firm’s potential role in the Singularity, and Dharmendra Modha from IBM talking about their DARPA-funded brain-emulation project — but this year’s Summit broadened the focus, including many newcomers to the Singularity scene such as SF writer Gregory Benford talking about his work at Genescient creating longevity remedies via studying long-lived "Methuselah flies", Ned Seeman on DNA computing, Stuart Hameroff and Michael Nielsen on quantum computation and its potential importance, and Stephen Wolfram on how Wolfram Alpha fits into the big picture of accelerating technological change. All in all, this year’s Summit was a bit more hard-sciency than the previous ones, and I fully approve of this shift. It is, after all, science and technology that are (potentially) moving us toward Singularity.

After the Summit itself there was a 1.5-day workshop involving many of the Summit speakers, along with a handful of other "thought leaders." This was more a "discussion group" than a formal workshop, and the talk ranged far and wide, including topics both intergalactically speculative and down-to-Earth. What I’m going to report here is one of the more out-there and speculative discussions I was involved in during the workshop — not because it was the most profoundly conclusive chat we had, but rather because I found the conversation fun and thought-provoking, and I think others may agree…

The Singularity Summit 2009

The topic of the discussion was "How to Avoid Extremely Bad Outcomes" (in the Singularity context). The discussion got deep and complex but here I’ll just summarize the main possible solutions we covered.

Surely it’s not a complete list but I think it’s an interesting one. The items are listed in no particular order. Note that some of the solutions involve nonstandard interpretations of "not extremely bad"!

Of course, this list is presented not in the spirit of advocacy (I’m not saying I think all these would be great outcomes in my own personal view), but more in the spirit of free-wheeling brainstorming.

(Also: many of these ideas have been explored in science fiction in various ways, but giving all the relevant references would n-tuple the length of this article, so they’ve been omitted!)

1. Human-enforced fascism
This one is fairly obvious. A sufficiently powerful dictatorship could prevent ongoing technological development, thus averting a negative Singularity. This is a case of a "very bad outcome" that prevents an "extremely bad outcome."

The Singularity Summit 2009 - Photo credit: SingularityU

2. "Friendly" AGI fascism
One "problem" with human-enforced fascism is that it tends to get overthrown eventually. Perhaps sufficiently powerful technology in the hands of the enforcers can avert this, but it’s not obvious, because often fascist states collapse due to conflicts among those at the top. A "Guardian" AGI system with intelligence, say, 3x human level — and a stable goal system and architecture — might be able to better enforce a stable social order than human beings.

3. AGI and/or upload panspermia
Send spacecraft containing AGIs or human uploads throughout the galaxy (and beyond). That way if the Earth gets blown up, our whole legacy isn’t gone.

4. Virtual world AGI sandbox
Create an AI system that lives in a virtual world that it thinks is the real world. If it doesn’t do anything too nasty, let it out (or leave it in there and let it discover things for us). Of course this is not foolproof, but that doesn’t make it worthless.

5. Build an oracular question-answering AGI system, not an autonomous AGI agent
If you build an AGI whose only motive is to answer human questions, it’s not likely to take over the world or do anything else really nasty.

Among the downsides are that humans may ask it how to do nasty things, including how to make AGIs that are more autonomous or more proactive about serving certain human ends.

6. Create upgraded human uploads or brain-enhanced humans first
If we enhance biological or uploaded human minds, maybe we’ll create smarter beings that can figure out more about the universe than us, including how to create smart and beneficial AGI systems.

The big downside is that these enhanced human minds may behave in nasty ways, as they’re stuck with human motivational and emotional systems (pretty much by definition: otherwise they’re not humans anymore). Whether this is a safer scenario than well-crafted superhuman-but-very-nonhuman AGI systems is not at all clear.

7. Coherent Extrapolated Volition
This is an idea of Eliezer Yudkowsky’s: Create a very smart AGI whose goal is to figure out "what the human race would want if it were as good as it wants to be" (very roughly speaking: see here for details).

Aside from practical difficulties, it’s not clear that this is well-defined or well-definable.

Singularity Summit 2009 - Photo credit: SingularityU

8. Individual Extrapolated Volition
Have a smart AGI figure out "what Ben Goertzel (the author of this post) would want if he were as good as he wants to be" and then adopt this as its goal system. (Or, substitute any other reasonably rational and benevolent person for Ben Goertzel if you really must….)

This seems easier to define than Coherent Extrapolated Volition, and might lead to a reasonably good outcome so long as the individual chosen is not a psychopath or religious zealot or similar.

9. Make a machine that puts everyone in their personal dream world
If a machine were created to put everyone in their own simulated reality, then we could all live out our days blissfully and semi-solipsistically until the aliens come to Earth and pull the plug.

10. Engineer a very powerful nonhuman AGI that has a beneficial goal system
Of course this is difficult to do, but if it succeeds that’s certainly the most straightforward option. Opinions differ on how difficult this will be. I have my own opinions that I’ve published elsewhere (I think I probably know how to do it), but I won’t digress onto that here.

11. Let humanity die the good death
Nietzsche opined that part of living a good life is dying a good death. You can apply this to species as well as individuals. What would a good death for humanity look like? Perhaps a gradual transcension: let humans’ intelligence increase by 20% per year, for example, so that after a few decades they become so intelligent they merge into the overmind along with the AGI systems (and perhaps the alien mind field!)…

Transcend too fast and you’re just dying and being replaced by a higher mind; transcend slowly enough and you feel yourself ascend to godhood and become one with the intelligent cosmos. There are worse ways to bite it.
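As a rough back-of-the-envelope illustration of that "20% per year" figure (which is just an example number, and treats "intelligence" as a single scalar, a huge simplification), here is a quick sketch of how fast that kind of compounding adds up:

```python
# Sketch: compound growth of "intelligence" at 20% per year,
# relative to a baseline of 1.0. Purely illustrative arithmetic.
growth_rate = 0.20

for years in (10, 20, 30, 40):
    multiple = (1 + growth_rate) ** years
    print(f"After {years} years: roughly {multiple:.0f}x baseline")

# Output:
# After 10 years: roughly 6x baseline
# After 20 years: roughly 38x baseline
# After 30 years: roughly 237x baseline
# After 40 years: roughly 1470x baseline
```

So even a "gradual" 20% annual increase yields minds hundreds of times the current human baseline within an ordinary lifetime, which gives some quantitative flavor to the idea that after a few decades such minds would merge into the overmind.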
