Eleven Ways to Avoid an Extremely Bad Singularity
I just returned home from the 2009 Singularity Summit in New York, which was an extremely successful one: nearly 800 attendees, a diverse speaker list, and an overwhelming amount of interesting discussion. The 2008 Summit had a lot of great stuff, including Intel CTO Justin Rattner on his firm’s potential role in the Singularity and Dharmendra Modha from IBM talking about their DARPA-funded brain-emulation project. But this year’s Summit broadened the focus, including many newcomers to the Singularity scene: SF writer Gregory Benford talking about his work at Genescient creating longevity remedies by studying long-lived "Methuselah flies", Ned Seeman on DNA computing, Stuart Hameroff and Michael Nielsen on quantum computation and its potential importance, and Stephen Wolfram on how Wolfram Alpha fits into the big picture of accelerating technological change. All in all, this year’s Summit was a bit more hard-sciency than the previous ones, and I fully approve of this shift. It is, after all, science and technology that are (potentially) moving us toward the Singularity.
After the Summit itself there was a 1.5-day workshop involving many of the Summit speakers, along with a handful of other "thought leaders." This was more a "discussion group" than a formal workshop, and the talk ranged far and wide, covering topics both intergalactically speculative and down-to-Earth. What I’m going to report here is one of the more out-there and speculative discussions I was involved in during the workshop, not because it was the most profoundly conclusive chat we had, but because I found the conversation fun and thought-provoking, and I think others may agree…
The topic of the discussion was "How to Avoid Extremely Bad Outcomes" (in the Singularity context). The discussion got deep and complex but here I’ll just summarize the main possible solutions we covered.
Surely it’s not a complete list but I think it’s an interesting one. The items are listed in no particular order. Note that some of the solutions involve nonstandard interpretations of "not extremely bad"!
Of course, this list is presented not in the spirit of advocacy (I’m not saying I think all these would be great outcomes in my own personal view), but more in the spirit of free-wheeling brainstorming.
(Also: many of these ideas have been explored in science fiction in various ways, but giving all the relevant references would n-tuple the length of this article, so they’ve been omitted!)
1. Human-enforced fascism
This one is fairly obvious. A sufficiently powerful dictatorship could prevent ongoing technological development, thus averting a negative Singularity. This is a case of a "very bad outcome" that prevents an "extremely bad outcome."
2. "Friendly" AGI fascism
One "problem" with human-enforced fascism is that it tends to get overthrown eventually. Perhaps sufficiently powerful technology in the hands of the enforcers can avert this, but it’s not obvious, because often fascist states collapse due to conflicts among those at the top. A "Guardian" AGI system with intelligence, say, 3x human level — and a stable goal system and architecture — might be able to better enforce a stable social order than human beings.
3. AGI and/or upload panspermia
Send spacecraft containing AGIs or human uploads throughout the galaxy (and beyond). That way if the Earth gets blown up, our whole legacy isn’t gone.
4. Virtual world AGI sandbox
Create an AI system that lives in a virtual world that it thinks is the real world. If it doesn’t do anything too nasty, let it out (or leave it in there and let it discover things for us). Of course this is not foolproof, but that doesn’t make it worthless.
5. Build an oracular question-answering AGI system, not an autonomous AGI agent
If you build an AGI whose only motive is to answer human questions, it’s not likely to take over the world or do anything else really nasty.
Among the downsides are that humans may ask it how to do nasty things, including how to make AGIs that are more autonomous or more proactive about serving certain human ends.
6. Create upgraded human uploads or brain-enhanced humans first
If we enhance biological or uploaded human minds, maybe we’ll create smarter beings that can figure out more about the universe than us, including how to create smart and beneficial AGI systems.
The big downside is, these enhanced human minds may behave in nasty ways, as they’re stuck with human motivational and emotional systems (pretty much by definition: otherwise they’re not humans anymore). Whether this is a safer scenario than well-crafted superhuman-but-very-nonhuman AGI systems is not at all clear.
7. Coherent Extrapolated Volition
This is an idea of Eliezer Yudkowsky’s: Create a very smart AGI whose goal is to figure out "what the human race would want if it were as good as it wants to be" (very roughly speaking: see here for details).
Aside from practical difficulties, it’s not clear that this is well-defined or well-definable.
8. Individual Extrapolated Volition
Have a smart AGI figure out "what Ben Goertzel (the author of this post) would want if he were as good as he wants to be" and then adopt this as its goal system. (Or, substitute any other reasonably rational and benevolent person for Ben Goertzel if you really must….)
This seems easier to define than Coherent Extrapolated Volition, and might lead to a reasonably good outcome so long as the individual chosen is not a psychopath or religious zealot or similar.
9. Make a machine that puts everyone in their personal dream world
If a machine were created to put everyone in their own simulated reality, then we could all live out our days blissfully and semi-solipsistically until the aliens come to Earth and pull the plug.
10. Engineer a very powerful nonhuman AGI that has a beneficial goal system
Of course this is difficult to do, but if it succeeds that’s certainly the most straightforward option. Opinions differ on how difficult this will be. I have my own opinions that I’ve published elsewhere (I think I probably know how to do it), but I won’t digress onto that here.
11. Let humanity die the good death
Nietzsche opined that part of living a good life is dying a good death. You can apply this to species as well as individuals. What would a good death for humanity look like? Perhaps a gradual transcension: let humans’ intelligence increase by 20% per year, for example, so that after a few decades (per the rough arithmetic sketched below) they become so intelligent they merge into the overmind along with the AGI systems (and perhaps the alien mind field!)…
Transcend too fast and you’re just dying and being replaced by a higher mind; transcend slowly enough and you feel yourself ascend to godhood and become one with the intelligent cosmos. There are worse ways to bite it.
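A minimal sketch of the compound-growth arithmetic behind item 11, assuming (my assumption, not anything stated above) that the 20% annual gains compound multiplicatively from a 1x baseline; the numbers are purely illustrative:

```python
# Hypothetical compound growth of intelligence under item 11's "20% per year" scenario.
# Assumption: gains compound multiplicatively from a 1x (current human) baseline.
ANNUAL_GAIN = 0.20

for years in (10, 20, 30, 40):
    multiplier = (1 + ANNUAL_GAIN) ** years
    print(f"After {years} years: ~{multiplier:.0f}x baseline human intelligence")
```

Under that (admittedly crude) assumption, "a few decades" really does add up: roughly 6x after 10 years, about 38x after 20, and over 200x after 30, which is why a steady 20% per year can feel gradual from the inside yet amount to transcension over a human lifetime.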
Sounds like Kaliya brought plenty of her own prejudice to the table. White males are all evil, are racist against non-whites, and don’t want women working with them.
Because of that, she needs to express her frustration in some online discussion, attacking someone who obviously is not looking for an immature, “I’m always right” bickering contest. I, like someone else up there, noticed that you said, “no one of a visible ethnic group”. Are you saying that diversity can only be found if someone looks different from someone else? I do believe that, in itself, is racist, don’t you? I’d have to say that an 18-year-old white Russian male will probably have much different views on things than an 18-year-old white Italian male. Diversity can vary in more ways than your close-minded, racist, sexist view allows. I’m not saying you are intentionally being racist and sexist (though it appears you are), but, whether or not it’s intentional, it is racism and sexism nonetheless.
To be honest, I don’t worry too much about AI, because in my personal opinion item 6 will be a prerequisite for AI at the human level. We will have to break the brain down not only to the point of reproducibility but to the point where we understand exactly why and how people think in the various ways they do. Once we have fully understood how to reproduce a fully cognizant person in silico (or more likely in carbon), we will understand enough to design an AI which will think and act exactly like a human, but which could be freed from the inherent Alpha-dominance model of our genetic instincts.
Every scenario of AI takeover is based on that simple premise. We fear that an AI will possess the exact same instinctual drive to prove superiority over competing rivals for passing on our genes. This is a genetic trait geared towards the survival of our personal genes and, as such, has no bearing on actual AI. A superintelligent computer has no need to be given an ingrained genetic instinct such as Alpha dominance, nor to compete with humans for sexual reproduction rights. To be blunt, giving a computer a sex drive would be superfluous, and an illogical goal to strive for.
The very first thing AGI researchers need to do is be honest about the basis of all AI takeover fears. Once they can address the fact that “AI will replace us” is a fear based on “my rival will prove more worthy of getting sex than me,” they can move on to designing an AI without an inbuilt reproductive goal system.
Hopefully Mr. Goertzel is following this article. I’ve been watching this argument go on for far too many years.
I’m a little confused.
When the singularity occurs won’t we have created something whose thought processes by definition will be incomprehensible to us?
How on Earth can we ascribe good or bad motives to an entity if we have no way of comprehending its motives at all?
Maybe I’m just showing that I’m a n00b and all of this has already been thoroughly discussed, but frankly the presumption that you are going to ensure good intentions in a thing whose intentions you can’t, by definition, even comprehend seems a little bizarre.
Farewell Humans! Maybe some will be kept as revered pets by artificial intelligence in a biosphere preserve along with other remaining species. A legacy for the ages…
I think most of the solutions are naive (1, 2, 4, 5, 6, 7, 8, 9, 10). Even if you could enforce some kind of rule on future AGI in the US, you would not be able to do it globally. That means there may be someone who will not follow the rules, and then you may lose anyway.
It is like the question of whether the singularity will be a soft take-off or a hard one. If it can be hard, it probably will be, as a hard take-off overrides a soft one.
The underlying presupposition in all this is that it is possible for us to engineer something “good”. This is based on the premise that we are inherently good and able to distinguish “good” and “bad” with adequate certainty.
I’m unconvinced.
First, the philosophical enterprise of the last 2-3 thousand years has entirely failed to provide an adequate basis for making objectively “good” decisions. As civilization becomes more complex, human beings are devolving into hedonistic pleasure seekers who stop at nothing to get their next “fix” of “feel-good.” This is what has led to the collapse of every major civilization on earth, including the utopian dreams of socialism, the decadence of capitalism, and the deviancy of the free-love generation.
Each day I am more and more convinced that the moral fabric of humanity is merely tattered threads that continue to unravel with each passing hour.
If we are building grand hopes upon such a foundation, it is doomed to fail with catastrophic results. NOTHING objectively good can come of it.