The Exponentially Accelerating Progress in Artificial Intelligence Raises Safety Questions
A day does not go by without a news article reporting some amazing breakthrough in artificial intelligence. In fact, progress in AI has been so steady that some futurologists, such as Ray Kurzweil, are able to project current trends into the future and anticipate what the headlines of tomorrow will bring us. Let’s look at some relatively recent headlines:

 1997 Deep Blue became the first machine to win a chess match against a reigning world champion (perhaps due to a bug).

 2004 DARPA sponsors a driverless car grand challenge. Technology developed by the participants eventually allows Google to develop a driverless automobile and prompts changes to existing transportation laws.

 2005 Honda’s ASIMO humanoid robot is able to walk as fast as a human, delivering trays to customers in a restaurant setting. The same technology is now used in military soldier robots.

 2007 Checkers is solved: the computer learns to play a perfect game, in the process opening the door for algorithms capable of searching vast databases of compressed information.

 2011 IBM’s Watson wins Jeopardy! against top human champions. It is currently training to provide medical advice to doctors and is capable of mastering any domain of knowledge.

 2012 Google releases its Knowledge Graph, a semantic search knowledge base, widely believed to be the first step to true artificial intelligence.

 2013 Facebook releases Graph Search, a semantic search engine with intimate knowledge of over one billion Facebook users, essentially making it impossible for us to hide anything from the intelligent algorithms.

 2013 The BRAIN Initiative, aimed at reverse engineering the human brain, receives $3 billion in funding from the White House, following an earlier billion-euro European initiative with the same goal. “It just so happens that the same technology the project will develop … could also be used to make our brains do whatever they want. Wirelessly. From a distance.”

From the above examples, it is easy to see that not only is progress in AI taking place, it is actually accelerating as the technology feeds on itself. While the intent behind the research is usually good, any developed technology could be used for good or evil purposes.

From observing exponential progress in technology, Ray Kurzweil was able to make hundreds of detailed predictions for the near and distant future. As early as 1990 he anticipated that, among other things, between 2010 and 2020 we would see:

  • Eyeglasses that beam images onto the users’ retinas to produce virtual reality. (See Project Glass)
  • Computers featuring “virtual assistant” programs that can help the user with various daily tasks. (See Siri)
  • Cell phones built into clothing and able to project sounds directly into the ears of their users. (See E-textiles)

But his projections for a somewhat distant future are truly breathtaking and scary. Kurzweil anticipates that by the year:

2029 Computers will routinely pass the Turing Test, a measure of how well a machine can pretend to be a human.

2045 The technological singularity occurs as machines surpass people as the smartest life forms and the dominant species on the planet, and perhaps the universe.

If Kurzweil is correct about these long-term predictions, as he has been so many times in the past, it would raise new and sinister issues related to our future in the age of intelligent machines.

Will we survive the technological singularity, or are we going to see a Terminator-like scenario play out? How dangerous will superintelligent machines be? Can we control them? What are the ethical implications of the AI research we are conducting today? We may not be able to predict the answers to those questions, but one thing is for sure: AI will change everything and impact everyone. It is the most revolutionary and most interesting discovery we will ever make. It is also potentially the most dangerous, as governments, corporations and mad scientists compete to unleash it on the world without much testing or public debate. I am excited to devote my next book to looking for answers to the fundamental questions raised by exponential developments in artificial intelligence, and in particular its safety implications.

Dr. Roman Yampolskiy is a computer scientist and the director of the Cyber Security Lab at the University of Louisville. His recent research focuses on the technological singularity. Dr. Yampolskiy is currently working on a book about the safety implications of the coming technological singularity, “Artificial Superintelligence: a Futuristic Approach.”

12 Responses

  1. Stan says:

    I think Hollywood really gets people’s imaginations going. However, AI is just that: artificial intelligence. While a machine could have its own thoughts, those thoughts would be purely logical. To become truly sentient it would need artificial emotion, artificial instincts, artificial drive and ambition, and I think we are a long way from that. So yes, your machine may be able to come up with answers to life, the universe and everything, but it still won’t understand what the question is. It’s a fascinating subject, though, and gaming against a true AI would be fun.

  2. Michael York says:

    The idea of creating an artilect, described as being a trillion trillion times as “smart” as a human, is ambiguous. This would assume that with access to all data, this artificial intelligence would be able to determine (I suppose statistically) the answers to all the previously unknown questions: where did we come from, why are we here, is there a GOD, how was the universe formed, and so on. Still, for people who demand quantifiable proof, statistical data will not suffice. Certain propositions can only be verified if all the data used is known to be true; as soon as even a best guess is introduced, the outcome is questionable. So will this computer mind be able to “know” all things, or is it simply a master control unit which heuristically catalogs and manipulates all known data? There is a difference between consciousness and computation.

  3. Daniel says:

    It doesn’t matter how advanced the AI becomes; sooner or later it will malfunction, even if it can repair itself. Remember, a programmed AI is only as advanced as its programmer wants it to be.

  4. j says:

    Watch the movie “Colossus: The Forbin Project.” It’s coming!

  5. Jaded Developer says:

    It will not take nearly as long as 2045 for AI to have a dramatic effect on human life. AI does not have to be smarter than all human beings, only a large number of them, to have a significant impact. A large portion of human beings are simply not very intelligent (politically incorrect but true). What kind of impact would an AI have if it took over the simpler jobs many humans depend on for a living? Humans may only be needed in small numbers to handle the exceptions: one human controlling 10 cash registers, or one human running 10 computer-controlled lawnmowers from a tablet computer in the truck. All of these interactions could be mastered with an algorithm able to handle 90% of the situations while a human solves the last 10% (the last 10% is very hard to create an algorithm for). That situation is not very far off.

  6. What makes me think a “Terminator” scenario isn’t in the near future is, most robots we build have intelligences developed to solve very specific problems (driving, chess, etc.); robots are getting those intelligences as a product of their environment, i.e. because humans require them. Humans and other animals got tendencies like jealousy, need for dominance, egoism, hatred, competitiveness, need for independence, and fear as a product of their natural environment, Earth’s resource-scarce physical landscape, in which machines never had to compete. So machines will never get those legacy evolutionary tendencies that would cause humans to rebel (e.g. resentment). Intelligent machines will have a completely unique set of needs and tendencies, many of which will probably be invisible to us. This will all be wrong if we ever build an exact human brain… but why would we want to do that? Humans are already easy to make.

  7. Bruno Rodrigues says:

    “Artificial Intelligence” is not an adequate expression for what is under development. Once a machine achieves the autonomy of creating its own thoughts, it will have genuine intelligence. Artificial intelligence was reached a long time ago.

  8. “It is the most revolutionary and most interesting discovery we will ever make.”
    I guess that’s a matter of opinion. I believe that the discovery of multidimensional reality, and the conquering of space-time to the point where the 3rd dimension becomes a mere plaything for us, would be the most revolutionary.

  9. shagggz says:

    “Eyeglasses that beam images onto the users’ retinas to produce virtual reality developed. (See Project Glass)”

    When I read this, I imagine lasers or a similar light projection technology writing images directly onto the eyeball, not a screen hovering inches away. The latter is far less interesting. Both will soon be antiquated once contact-lens displays take off, anyway.

    • wisefool says:

      His projections were for 2010 to 2020. I know the reference to Project Glass was misleadingly implying it’s already here, but 2020 isn’t here yet.

