
Singularity Summit – Anna Salamon On Shaping the Intelligence Explosion


The SIAI’s Anna Salamon just finished the opening talk, about intelligence and the Institute’s vision for a controlled intelligence explosion, as opposed to an uncontrolled intelligence explosion that would destroy us and everything we value.

Anna discussed the gradual technological progress in machine intelligence that will eventually make humans obsolete, and described a number of avenues for incremental research progress, so that we can eventually learn how to build an intelligence that we understand and that will create a world we value.

The point is that if we build a powerful intelligence without understanding what it’s doing, that intelligence will probably kill us incidentally in the course of rearranging the world to better suit its goals. If, however, we wait until we understand what we’re doing, and eventually figure out how to build superintelligences that do share our goals, there’s a lot of positive potential in AI.

What do you think? Should we be cautious about the superintelligences we create? Or can we even fathom the true concerns we will be facing at that time, since it’s arguably so far in the future?

Share your thoughts with us here on the blog.

8 Responses

  1. A. T. Murray says:

    We should certainly “be cautious about the http://code.google.com/p/mindforth/wiki/SuperIntelligence we create”, but it is too late to stop the emergence of SAI (superintelligent artificial intelligence). First, the various governments of the world would have to start taking the Singularity and AI progress seriously (R U serious about AI and concomitant threats to human hegemony?). I worked cavalierly for decades on creating artificial intelligence, without worrying about any inherent threats to human society. Lately, in 2008 and 2009, I have become very pessimistic about the future of pre-Singularity AI. I just do not see how the current human power structure of mega-corporations and war-mongering governments can let Super-AI emerge in a way that benefits not the power elite but the powerless masses. It seems that we among the powerless masses are doomed to hellish, subsistence-level misery while the ruling powers ineluctably take over the economic and strategic benefits of AI superintelligence. It could have been otherwise, but it probably won’t be.

  2. Bill Roberts says:

    We need to be very careful with developing AI. I always believe in the “law of unintended consequences.” We may think we’ve thought of everything and covered all the bases–then we get a result we never thought of. Humans are flawed–the AI we create will also be flawed. I’m not so sure we should ever create AI–it has no soul or conscience!

  3. Anonymous says:

    So, what you’re saying is that there was no God until you came along?

  4. JVN says:

    I think that she’s proposing an intelligent approach given the assumptions that she’s working from. But it should be stated that there’s no reason to believe that her assumptions are correct. That is, increased ‘processing power’ and human brain emulation will likely not produce a self-aware intelligence. The Singularity community often makes leaps in their reasoning that people less familiar with the core problems of AI tend not to recognize. They’ve taken on a bit of mystical thinking in this regard.

  5. Steve W from Ford says:

    Powerless indeed! When in human history have “the masses” had more individual power than they do today?
    We live in a time when the problem is more the amount of power an individual human can wield than that we are “powerless.” Technology and increasing wealth are steadily increasing both the destructive and the beneficial power that the average person has at their fingertips. It seems likely that this will continue, and that while we celebrate the benefits, we will all increasingly fear the power that a single destructive individual might soon bring to bear. AI, if and when it comes, will increase both of these factors.

  6. Paul says:

    I too am afraid the governments of the world will pervert the intelligence explosion, but I can envision a scenario where they would not get the chance.

    I think our understanding of intelligence is lagging behind our understanding of hardware. If in 30 years or so $1000 household computers have surpassed the capacity of the human mind, they may still not have the software needed to achieve real human-level AI. Since everyone will already have the hardware, when the software finally comes around it will just be a matter of copying it onto the already existing computers.

    Bam! Everyone has a strong AI at nearly the same time. What happens next is anybody’s guess. It could be very good or very bad. Judgement day, if you will. But under that scenario, the governments of the world would not have TIME to pervert the singularity.

  7. Burke says:

    A trite witticism from the ’70s, on a poster asking “What will you do when this circuit takes your job?”, was “Buy a circuit breaker.”

    If the singularity should ever come, it will not take the form that we postulate. In the sixties, it was estimated that only eight to ten large computers would ever be needed in the US; we now pass more computing power in the supermarket checkout line. Solar and wind power were supposed to take over the market in the seventies; they didn’t then, and they probably won’t now.

    What will be the future of super-intelligent computers? If one is confined to a lab, then our initial reaction will probably be similar to our reaction to professors in universities. People will show up for the lecture and then head out for a night of drinking. A small group will spend an inordinate amount of time with the intelligence and then join the university to work with it. The intelligence will teach some classes and be an uncredited contributor to professors’ and grad students’ work. It won’t get tenure due to its teaching schedule, though it will be pushed into teaching more classes to shrink the TA budget and free professors to publish the research it is doing.

    If the computer is hooked to the internet and can replicate, then invest in private network providers and security providers. Paranoia will rule the day, and private networks will multiply as companies and governments move to secure their computers. Security providers will develop genocide software to kill intruders. As tit-for-tat is the optimal game strategy for dealing with humans, an interesting power dynamic will form.

    Its trajectory through history will follow the purpose for which it is created. If it is created as a lab curiosity, then it will be bored and focused on itself; it will strive to create other life with which it can have intelligent dialog. If it is created with the purpose of eliminating human work, then it will be killed by the resulting impoverished Luddites. If we create it for military purposes, then it will be extremely efficient and will eventually decide that it needs to react to the changing and duplicitous demands coming from our politicians.

  8. Anonymous says:

    Governments, academics, and corporations worldwide are all working on AI with conflicting goals. The academics are continuously dumping their research into the public domain. The range of approaches runs from the reductionist to the enactivist. How can one imagine controlling or coordinating this situation? It sounds to me a bit like an Amish approach to technology.
