
Singularity Summit – Anna Salamon On Shaping the Intelligence Explosion

The SIAI’s Anna Salamon just finished the opening talk about intelligence and the Institute’s vision for a controlled intelligence explosion, as opposed to an uncontrolled intelligence explosion that would destroy us and everything we value.

Anna discussed the gradual technological progression of intelligence that will eventually make humans obsolete, and described a number of avenues for incremental research progress, so that we can eventually learn how to build an intelligence that we understand, and that will create a world we value.

Her point was that if we build a powerful intelligence without understanding what it’s doing, that intelligence will probably kill us incidentally in the course of rearranging the world to better suit its goals. If, however, we wait until we understand what we’re doing, and if we eventually figure out how to build superintelligences that do share our goals, there is a lot of positive potential in AI.

What do you think? Should we be cautious about the superintelligences we create? Or can we even fathom the true concerns we will be facing at that time, since it’s arguably so far in the future?

Share your thoughts with us here on the blog.
