H+ Magazine
Covering technological, scientific, and cultural trends that are changing -- and will change -- human beings in fundamental ways.

Editor's Blog

Ben Goertzel
October 7, 2009


I just returned home from the 2009 Singularity Summit in New York, which was an extremely successful one: nearly 800 attendees, a diverse speaker list, and an overwhelming amount of interesting discussion. The 2008 Summit had a lot of great stuff, including Intel CTO Justin Rattner on his firm's potential role in the Singularity, and Dharmendra Modha from IBM talking about their DARPA-funded brain-emulation project -- but this year's Summit broadened the focus, including many newcomers to the Singularity scene such as SF writer Gregory Benford talking about his work at Genescient creating longevity remedies by studying long-lived "Methuselah flies", Ned Seeman on DNA computing, Stuart Hameroff and Michael Nielsen on quantum computation and its potential importance, and Stephen Wolfram on how Wolfram Alpha fits into the big picture of accelerating technological change. All in all, this year's Summit was a bit more hard-sciency than the previous ones, and I fully approve of this shift. It is, after all, science and technology that are (potentially) moving us toward Singularity.

After the Summit itself there was a 1.5-day workshop involving many of the Summit speakers, along with a handful of other "thought leaders." This was more a "discussion group" than a formal workshop, and the talk ranged far and wide, including topics both intergalactically speculative and down-to-Earth. What I'm going to report here is one of the more out-there, speculative discussions I was involved in during the workshop -- not because it was the most profoundly conclusive chat we had, but because I found the conversation fun and thought-provoking, and I think others may agree...

The Singularity Summit 2009

The topic of the discussion was "How to Avoid Extremely Bad Outcomes" (in the Singularity context). The discussion got deep and complex, but here I'll just summarize the main possible solutions we covered.

Surely it's not a complete list but I think it's an interesting one. The items are listed in no particular order. Note that some of the solutions involve nonstandard interpretations of "not extremely bad"!

Of course, this list is presented not in the spirit of advocacy (I'm not saying I think all these would be great outcomes in my own personal view), but more in the spirit of free-wheeling brainstorming.

(Also: many of these ideas have been explored in science fiction in various ways, but giving all the relevant references would n-tuple the length of this article, so they've been omitted!)

1. Human-enforced fascism
This one is fairly obvious. A sufficiently powerful dictatorship could prevent ongoing technological development, thus averting a negative Singularity. This is a case of a "very bad outcome" that prevents an "extremely bad outcome."

The Singularity Summit 2009 - Photo credit: SingularityU

2. "Friendly" AGI fascism
One "problem" with human-enforced fascism is that it tends to get overthrown eventually. Perhaps sufficiently powerful technology in the hands of the enforcers can avert this, but it's not obvious, because often fascist states collapse due to conflicts among those at the top. A "Guardian" AGI system with intelligence, say, 3x human level -- and a stable goal system and architecture -- might be able to better enforce a stable social order than human beings.

3. AGI and/or upload panspermia
Send spacecraft containing AGIs or human uploads throughout the galaxy (and beyond). That way if the Earth gets blown up, our whole legacy isn't gone.

4. Virtual world AGI sandbox
Create an AI system that lives in a virtual world that it thinks is the real world. If it doesn't do anything too nasty, let it out (or leave it in there and let it discover things for us). Of course this is not foolproof, but that doesn't make it worthless.



5. Build an oracular question-answering AGI system, not an autonomous AGI agent
If you build an AGI whose only motive is to answer human questions, it's not likely to take over the world or do anything else really nasty.

One downside is that humans may ask it how to do nasty things, including how to make AGIs that are more autonomous or more proactive about serving certain human ends.

6. Create upgraded human uploads or brain-enhanced humans first
If we enhance biological or uploaded human minds, maybe we'll create smarter beings that can figure out more about the universe than we can, including how to create smart and beneficial AGI systems.

The big downside is that these enhanced human minds may behave in nasty ways, as they're stuck with human motivational and emotional systems (pretty much by definition: otherwise they're not humans anymore). Whether this is a safer scenario than well-crafted superhuman-but-very-nonhuman AGI systems is not at all clear.

7. Coherent Extrapolated Volition
This is an idea of Eliezer Yudkowsky's: Create a very smart AGI whose goal is to figure out "what the human race would want if it were as good as it wants to be" (very roughly speaking: see here for details).

Aside from practical difficulties, it's not clear that this is well-defined or well-definable.

Singularity Summit 2009 - Photo credit: SingularityU

8. Individual Extrapolated Volition
Have a smart AGI figure out "what Ben Goertzel (the author of this post) would want if he were as good as he wants to be" and then adopt this as its goal system. (Or, substitute any other reasonably rational and benevolent person for Ben Goertzel if you really must....)

This seems easier to define than Coherent Extrapolated Volition, and might lead to a reasonably good outcome so long as the individual chosen is not a psychopath or religious zealot or similar.

9. Make a machine that puts everyone in their personal dream world
If a machine were created to put everyone in their own simulated reality, then we could all live out our days blissfully and semi-solipsistically until the aliens come to Earth and pull the plug.

10. Engineer a very powerful nonhuman AGI that has a beneficial goal system
Of course this is difficult to do, but if it succeeds, it's certainly the most straightforward option. Opinions differ on how difficult it will be. I have my own opinions, which I've published elsewhere (I think I probably know how to do it), but I won't digress onto that here.

11. Let humanity die the good death
Nietzsche opined that part of living a good life is dying a good death. You can apply this to species as well as individuals. What would a good death for humanity look like? Perhaps a gradual transcension: let humans' intelligence increase by, say, 20% per year, so that after a few decades they become so intelligent that they merge into the overmind along with the AGI systems (and perhaps the alien mind field!)...

Transcend too fast and you're just dying and being replaced by a higher mind; transcend slowly enough and you feel yourself ascend to godhood and become one with the intelligent cosmos. There are worse ways to bite it.
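
(As a quick sanity check on that pace: here's a minimal illustrative sketch in Python, assuming a flat 20% annual gain and treating "intelligence" as a single multiplier on today's baseline -- a toy calculation, not a real model of minds.)

# Illustrative only: compound a hypothetical 20%-per-year intelligence gain.
for years in (10, 20, 30, 40):
    print(f"after {years} years: ~{1.2 ** years:,.0f}x today's level")
# Prints roughly 6x, 38x, 237x, and 1,470x respectively.

At that rate, "a few decades" really does mean a mind hundreds of times beyond today's baseline, which is the sense in which the transition could feel gradual from the inside while being a transcension from the outside.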

24 Comments

    The underlying presupposition in all this is that it is possible for us to engineer something "good". This is based on the premise that we are inherently good and able to distinguish "good" and "bad" with adequate certainty.

    I'm unconvinced.

    First, the philosophical enterprise of the last 2-3 thousand years has entirely failed to provide an adequate basis for making objectively "good" decisions. As civilization becomes more complex, human beings are devolving into hedonistic pleasure seekers who stop at nothing to get their next "fix" of "feel-good." This is what has led to the collapse of every major civilization on earth, including the utopian dreams of socialism, the decadence of capitalism, and the deviancy of the free-love generation.

    Each day I am more and more convinced that the moral fabric of humanity is merely tattered threads that continue to unravel with each passing hour.

    If we are building grand hopes upon such a foundation, it is doomed to fail with catastrophic results. NOTHING objectively good can come of it.

    I think most of the solutions are naive (1, 2, 4, 5, 6, 7, 8, 9, 10). Even if you could enforce some kind of rule on future AGI in the US, you would not be able to do it globally. That means there may be someone who will not follow the rules, and then you may lose anyway.

    It is like the question of whether the singularity will be a soft take-off or a hard one. If it can be hard, it probably will be, as the hard take-off overrides the soft.

    Farewell Humans! Maybe some will be kept as revered pets by artificial intelligence in a biosphere preserve along with other remaining species. A legacy for the ages...

    I'm a little confused.

    When the singularity occurs won't we have created something whose thought processes by definition will be incomprehensible to us?

    How on Earth can we ascribe good or bad motives to an entity if we have no way of comprehending its motives at all?

    Maybe I'm just showing that I'm a n00b and all of this has already been thoroughly discussed, but frankly the presumption that you can ensure good intentions in a thing whose intentions you can't even comprehend by definition seems a little bizarre.

    To be honest, I don't worry too much about AI because, in my personal opinion, 6 will be a prerequisite for human-level AI. We will have to break the brain down not only to the point of reproducibility but to the point where we understand exactly why and how people think in the various ways they do. Once we have fully understood how to reproduce a fully cognizant person in silico (or more likely in carbon), we will understand enough to design an AI which will think and act exactly like a human, but which could be freed from the inherent Alpha-dominance model of our genetic instincts.

    Every scenario of AI takeover is based on that simple premise. We fear that an AI will possess the exact same instinctual drive to prove superiority over rivals competing to pass on our genes. This is a genetic trait geared towards the survival of our personal genes and, as such, has no bearing on actual AI. A superintelligent computer has no need to be given an ingrained genetic instinct such as Alpha dominance, nor to compete with humans for sexual reproduction rights. To be blunt, a computer with a sex drive is superfluous, and an illogical goal to strive for.

    The very first thing AGI researchers need to do is be honest about the basis of all AI takeover fears. Once they can address the fact that "AI will replace us" is a fear based on "my rival will prove more worthy of getting sex than me", they can move on to designing an AI without an inbuilt reproductive goal system.

    Hopefully Mr. Goertzel is following this article. I've been watching this argument go on for far too many years.

    Sounds like Kaliya brought plenty of her own prejudice to the table. White males are all evil, are racist against non-white, and don't want women working with them.
    Because of that, she needs to express her frustration in some online discussion, attacking someone who obviously is not looking for an immature, "I'm always right" bickering contest. I, like someone else up there, noticed that you said "no one of a visible ethnic group". Are you saying that diversity can only be found if someone looks different than someone else? I do believe that, in itself, is racist, don't you? I'd have to say that an 18-year-old white Russian male will probably have much different views on things than an 18-year-old white Italian male. Diversity can vary in more ways than your close-minded, racist, sexist view allows. I'm not saying you are intentionally being racist and sexist -- though it appears you are -- but, whether or not it's intentional, it is racism and sexism nonetheless.

    4. Virtual world AGI sandbox
    I'm all for the sandbox idea, but it would only be done by those who choose to do it. How would you enforce that all AGI systems remain on isolated computers without some form of policy set into place? Unstable experimental AGI should be worked on using systems with no Internet access of any kind -- a single terminal and no access to robots. All scientists working with AGI systems would need to be monitored to make sure they are not carrying the code out of the system.

    3. AGI and/or upload panspermia

    This is the only one I sort of agree with, and only if the worlds that are "seeded" are germinated with "current" humans, not transhumans or posthumans. Seeding worlds with posthumans would not mean that the human legacy survives, but rather humanity's offspring. Also, the humans may very well still go on to create the transhumans and posthumans, so you could say that humans are the sperm for even greater intelligence.

    Another reason worlds should be seeded with humans instead of posthumans is that starting a civilization at or near the singularity could very well get it wiped out too. Starting out with a more human base may give them the needed time to send out their own DNA prior to their singularity, resulting in a cosmic circle of life.

    9. Make a machine that puts everyone in their personal dream world

    Sounds like you're calling for the creation of "Digital Heaven" -- not to be confused with the Matrix. No, it will be a digital nirvana, where things like crime, poverty, and money will be a thing of the past!
    A virtual synthetic paradise prison for the mind. I view this as a negative singularity in itself. Sure, we still exist, but in a way that seems geared towards pleasure seeking only.

    10. Engineer a very powerful nonhuman AGI that has a beneficial goal system
    This sounds like a modified version of the "be friendly to humans" problem.

    11. Let humanity die the good death
    Not an option! At its core, humanity could serve as the seed of greater intelligence. The multiplicity of ways humanity could evolve its intelligence suggests that many positive outcomes could arise from starting with a human base vs. a posthuman base for greater intelligence.

    Wolfram Alpha is still lacking a bit.

    "eliza dushku picture"

    Wolfram|Alpha isn't sure how to compute an answer from your input.

    Do we really want a future without Eliza's pictures?

    This entire article is nothing more than a blatant display of self-regarding navel gazing. Your ideal future is one in which everyone is forced by some eternal indestructible tyranny to be just like you? The sooner you plug yourself into a virtual reality where you can live out your narcissistic fantasies in a solipsistic little universe entirely on your own and leave the real world to the rest of us, the better.

    Most of these suggest simply preventing a singularity, under the assumption that it will, or just might, be bad. A few do attempt to investigate whether it might be bad or good, such as creating a virtual world, trying to "spark" a singularity within it, and seeing what happens.

    However, I cannot see that being remotely safe. As others have described, how long would it take an adult kept in a cage to convince children to let him out? Now think of it more like a bacterium vs. a human, and then more still on top of that. "Social engineering" by such creatures will be every bit as astounding as their scientific prowess.

    As for a dictatorship, slow progress can continue in such realms, even if it's not guaranteed. And, quite frankly, there's the issue of "better dead than Red". I'd rather risk a complete wipeout than live a miserable existence. "Imagine a boot stamping on a human face -- forever." No, thanks. And "forever" is a long time, during which they'd either rise to transcendence anyway -- or collapse back down and either completely die out (making the whole exercise pointless) or rise back up with more freedom and be back in the same boat we are in now.

    The best bet would be to do this in a controlled manner, and hope that some transcended product of humanity wouldn't suddenly decide humans were not only worthless, but should be gotten rid of as wastes of resources.

    We have a purpose for existence -- our conscious minds. Whatever a singularity entity is, it must include that, even if it's greatly expanded. We do not deliberately cause pain to animals, so hopefully this creature won't cause pain to us. And hopefully it will be capable enough that it will never "need to decide to wipe us out for resources". Or pure evil. Or whatever.

    And, quite frankly, if this is already a created universe for "me" or "me and a bunch of others", I want my money back. Eliza is in Hollywood and not right here. >:-(

    In every good scenario there is some sort of AI at work; we must be careful, or transhumanists are going to become a cult that believes the messiah is some sort of AI that rules over humanity.
    The best way to prevent technological oblivion is to start with a global government that pushes technological advances and then merge all minds into one. But that is just me.

    The statement in your opening paragraph is very self-congratulatory about "the diversity of the speakers list" - one woman, no one of a visible ethnic group, and almost no one under the age of 40. You are totally blind to your own lack of diversity.

    Kaliya, you hit the nail on the head. These issues were pointed out repeatedly to the organizers, and their only responses were variations on "We couldn't find any qualified women/others; do you want us to include token 'mediocrities'?" (as if the picks were so shiny that there were no female equivalents thereof). The sole woman speaker, an employee of the Singularity Institute, was added after the brouhaha -- and the choice tells you something about the venture.

    I wrote about this here, if you're interested:

    Girl Cooties Menace the Singularity!
    http://www.starshipnivan.com/blog/?p=658

    Is It Something in the Water? Or: Me Tarzan, You Ape
    http://www.starshipnivan.com/blog/?p=712

    Racist.

    And sexist.

    You don't care about the ideas or the individuals, you care about appearances.

    The trouble with diversity language is how narrow it's become.

    Intellectual diversity doesn't know race, class, or gender. Twenty-five years of obsession with making sure that representative groups are represented beyond their percentage of social makeup (in the case of ethnicity) or contribution to the field has actually ruined actual diversity.

    I don't care what equipment is between someone's legs. And, to paraphrase Stephen Jay Gould, race is meaningless in the context of evolution.

    It's also meaningless in the context of intelligence. Systems wired against someone, hegemonically or otherwise, because of race, class, or gender need to be changed or ended. This would include systems that measure quality by such brands of representation.

    I'm an advocate of replacing the word "diversity" with the word "justice."

    "representative groups are represented beyond their percentage of social makeup (in the case of ethnicity) or contribution to the field has actually ruined actual diversity."

    Are you kidding me -- so you are saying it is a good thing for diversity that only white guys are speaking?

    Non-white people make up 25% of the voting public in the US, and this conference has a big fat 0% of non-white people speaking. Women make up 50% of the population; you have one woman speaking, so that is 4% of your total number of speakers. Please explain how pointing out these numbers is "pushing beyond the existing social makeup".

    You can go on and on all you want about how it doesn't matter what is "between someone's legs", but in reality there is a fundamental cultural bias against women and non-white people participating in this and many other scientific, engineering, and technical fields. Until you and others actually look at the nature of the default culture, you will keep thinking that "diversity" means the "intellectual breadth of old white men". Justice means there is more than one gender and more than one skin color seen as having intellectual validity.

    Kaliya, don't waste your breath. People with this outlook are so embedded in their privilege that anything that threatens to dislodge it is "unjust" -- legacy admissions, on the other hand, are fine and dandy. Meanwhile, whenever women audition behind screens or send college or grant applications with names removed, their numbers soar.

    I bet Mr. Robisch holds Iron John ceremonies to empower the oh-so-threatened white middle class men who suffer from unfair competition by those damn Others who keep forgetting their rightful place (furniture).

    Here's the closing paragraph of my article, Is It Something in the Water? Or: Me Tarzan, You Ape (http://www.starshipnivan.com/blog/?p=712):

    "I think that true equality will come when non-white non-males can be as mediocre as white men. And when that time comes, I guarantee you that the quality of mindblowing anthologies won’t budge. In the meantime, we’ll have to make do with the overqualified Others that occasionally squeak past the endless hazing gauntlet – if the stuff in the water doesn’t get us first."

    If you'll actually read the note I wrote, you'll realize that I would never say that "only white men should be speaking." And please don't be so crass.

    The "voting public" is an interesting selection of the population. I didn't make it; you did. The reason I didn't make that selection is that I don't think justice only applies to voters. What I was talking about is very simple; programmatic and institutional responses to the problems of the glass ceiling (which I am well aware is real and have had close experience witnessing and fighting) or Jim Crow politics are run on what are called rubrics. These respond to levels of diversity by quantifying and qualifying them with statistical models.

    What happens when the models actually run as they are meant to (and, by the way, I'm against this kind of approach in general; I favor evaluating the work over assessing the systemic problem all the time--it's like looking at the clock for the entire match instead of actually moving a chess piece) is that when the goals are met--for race, class, gender, age, whatever the categorical designator--the system is supposed to shut itself down. That's called a success.

    But the system doesn't shut down, and then we get converts like yourself who just refuse to hear anything but their own choirs singing. Any of us who disagree are knuckle-dragging mouth breathers, regardless of the rationality of the argument. You and your cohort Athena wind up treating me like I'm Rush Limbaugh. I detest Rush Limbaugh and his kind, and have in all likelihood done more to fight him than you've done on your best day.

    So please at least show some respect for an idea about progress that doesn't fit your sexist program of bashing Iron John.

    And my name is Dr. Robisch, please. I worked a long time to earn that title. Were you a little more kind, I'd be fine with "Kip."

    Have a nice holiday. Try not to hate too many men this Christmas. We aren't all either juicer Philistines or effeminate pushovers. Some of us actually have defensible intellects.

    I don't think we should bother ourselves with the preservation of the human race. As natural evolution shows us, all species evolved and were replaced. Now humans are just accelerating evolution and probably expanding it into directions other than biology. The next step is enhanced intelligence, better self-preservation (elimination of the aging defect), and of course colonization of the Universe. The human race as it is has no chance of advancing in those directions. It's time to evolve.
    If humanity is stupid enough to self-destruct instead of evolving, that's not a big problem. The Earth has at least 1 billion more years to try again, and there are so many other Earths out there. The Universe will get it right eventually.

    Where are the humans in all of these scenarios? AIs or AGIs all over the place, that doesn't sound fun at all.

    Beyond the issues of age and gender diversity that Kaliya mentions, all the opinions seem to be "all of this is good" -- but look out for the bad outcomes. I've got news for you: what we have now is a picture of the bad outcomes, and it seems like your panel was all cheerleaders for the status quo in terms of power and politics.

    If we get out of this well, it will be because we remembered the important parts of being human: care and concern for our fellow humans, as well as for all the other creatures.

    I could not agree with you more, Soc. Humanity's destruction is inevitable, yet our acceleration towards this event is illogical. Hopefully, the evolution of intelligence will surpass "doomsday."

    I'm really sorry to hear these vitriolic responses to my email.

    Athena describes me as someone "emboldened in my privilege," threatened somehow by what might dislodge me (or it). This is what's called a presupposition, and in most self-righteous arguments it's what sinks the speaker.

    I grew up in government housing. My father died when I was little, and my mother raised us. I was only the second person in my extended family to go to college and receive an advanced degree. I have never had an inheritance, a trust fund, or any privilege beyond what labor produced. I've worked very hard. I grew up in an Italian/black/Jewish area of New Jersey run by small-time mafia, a place of extreme prejudice.

    I lost my job because I favored intellectual diversity, and the kind of sound rhetoric backed by defensible research that indeed dislodged the power of, in my case, the anthropocentric college professors who were uninterested in my work on the environment.

    Your comments are unfair. I don't say what I say because I am a privileged white male. I'm not. I say them for the opposite reason--because genuine accomplishment can't be manufactured by race, class, and gender agendas. For people of color, women, or the sexually liberated and enlightened, the work has to do the talking, or it's all tokenism.

    We once sought to hire a poet at Purdue University. Among the top candidates was a woman that all of us on the committee had on our short list. Suddenly, in the midst of the hire, someone stood up (there were only four of us in the room) and gave an impassioned speech about how it would be "unconscionable" for us to leave this candidate off our list--how it was our "duty" to include her in the spirit of diversity. I explained to the bully pulpiteer that I had the candidate on the list because she was a good poet who, on paper, looked like she would work out in a classroom. He was talking me out of her by making a speech indicating that I needed her on the list because she was an African American woman, which demeaned her and functioned as a racist, sexist agenda.

    When the candidate came to Purdue she said nearly the same thing from the lectern. She was unimpressed with Purdue's strident declarations of "diversity." The proof of the pudding, as they say, is in the eating.

    I want justice, not retribution, or settling some score, or sticking it to the man. I wouldn't want that regardless of the equipment between my legs, because I think with my mind. That has no color, and is for me a multi-gendered organ.

    So back off, Kaliya and Athena. You're attacking the wrong person. I don't need a banner. I have argument, evidence, logic, and time on my side. And on the side of the oppressed. It could be that your passion is doing more harm than good.

    Sad to say, Dr. Robisch, you're wasting your time responding to Athena. She is a person who expresses her opinions as EDICTS FROM GOD, and anyone who dares to point out that they are OPINIONS is simply a "person with a brain the size of a pigeon", to quote her own blog site.

    Sadly, while she has some good ideas about scientists needing to take the time to educate the public, her concept of education is by dictate, with the assumption that only she knows the truth and that anything which is not currently possible is by definition impossible. She has unshakable confidence in her superiority over anyone who is not herself, and only seeks to converse with those who will support her opinions. I have yet to see her make one positive statement about transhumanism or transhuman concepts, despite her claims to be a transhumanist, choosing instead to dismiss cryonics with "you're dead, and even if they can restore your body, it's not going to be you", uploading with "we'll never achieve uploads; all we'll get is poor deluded programs who think they are the person who died", and human enhancement with "the body and brain are too complex; they'll never be able to change the body without disrupting the mind."

    To see that she is also attacking AI research is just par for the course.

    The very idea of transhumanism has a lot of potential if it can be saved from "the pretentious language of the schools," as Ken MacRory once put it. What I hope is that, rather than theorizing the transhuman to the point at which humanities academics turn it Frankensteinian (rather than doing the work in ethics that would give its scientific potential the right human bounds), we contain it and release it at a pace far slower than what the Information Age is used to.

    It seems that patience is a real key to transcendence, and a breakthrough--in cryonics, in "human nature," or in unshakeable confidence--does its best work when the best of its possibilities are the only ones ethically adopted.

    Whatever troubles Athena or Kaliya might have with certain political issues now, it'll be interesting to see how they feel when we get past the brochure-variety concern with diversity and move on to addressing issues that far exceed that level of social discourse. What it means to be human is of great concern to anyone environmentally conscious, as I try hard (and too often fail) to be.

    Thanks, Valkyrie, for the commiseration.
