Future Technology: Merger or Trainwreck?

Imagine. You’ve been working for many decades, benefiting from advances in computing. The near miracles of modern spreadsheets, Internet search engines, collaborative online encyclopaedias, pattern recognition systems, dynamic 3D maps, instant language translation tools, recommendation engines, immersive video communications, and so on, have been steadily making you smarter and increasing your effectiveness. You look forward to continuing to “merge” your native biological intelligence with the creations of technology. But then … bang!

Suddenly, much faster than expected, a new breed of artificial intelligence is bearing down on us, like a huge intercity train rushing forward at several hundred kilometres per hour. Is this the kind of thing you can easily hop onto and incorporate into your own evolution? Care to stand in front of this train, sticking out your thumb to try to hitch a lift?

This image comes from a profound set of slides used by Jaan Tallinn, one of the programmers behind Kazaa and a founding engineer of Skype. Jaan was speaking last month at the Humanity+ UK event which reviewed the film “Transcendent Man”, made by director Barry Ptolemy about the ideas and projects of serial inventor and radical futurist Ray Kurzweil. You can find a video of Jaan’s slides on blip.tv, and videos (but with weaker audio) of talks by all five panelists on KoanPhilosopher’s YouTube channel.

Jaan was commenting on a view that was expressed again and again in the Kurzweil film – the view that humans and computers/robots will be able to merge, into some kind of hybrid “post-human”.

This “merger” viewpoint has a lot of attractions:

  • It builds on the observation that we have long co-existed with the products of technology – such as clothing, jewellery, watches, spectacles, heart pacemakers, artificial hips, cochlear implants, and so on.
  • It provides a reassuring answer to the view that computers will one day be much smarter than (unmodified) humans, and that robots will be much stronger than (unmodified) humans.

But this kind of merger presupposes that the pace of improvement in AI algorithms will remain slow enough that we humans can remain in charge. In short, it presupposes what people call a “soft take-off” for super-AI, rather than a sudden “hard take-off”. In his presentation, Jaan offered three arguments in favour of a possible hard take-off.

The first argument is a counter to a counter. The counter-argument, made by various critics of the concept of the singularity, is that Kurzweil’s views on the emergence of super-AI depend on the continuation of exponential curves of technological progress. Since few people believe that these exponential curves really will continue indefinitely, the whole argument is suspect. The counter to the counter is that the emergence of super-AI makes no assumption about the shape of the curve of progress. It just depends upon technology eventually reaching a particular point – namely, the point where computers are better than humans at writing software. Once that happens, all bets are off.
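The threshold argument can be illustrated with a toy model (all numbers here are illustrative assumptions, not predictions): progress only needs to cross a particular point, not follow any particular curve. Below, capability grows by a fixed amount per year while humans write the software; once capability passes the human level, each generation of software improves the next, and growth becomes self-reinforcing regardless of how slow the approach was.

```python
# Toy model: a "hard take-off" needs only a threshold crossing,
# not a particular shape of progress curve. All figures are
# hypothetical, chosen purely for illustration.

HUMAN_LEVEL = 100.0  # capability at which AI writes software better than humans


def simulate(years=30, start=50.0, human_rate=2.0, self_gain=1.5):
    """Capability grows additively while humans do the work;
    after the threshold, each step multiplies capability instead."""
    capability = start
    trajectory = [capability]
    for _ in range(years):
        if capability < HUMAN_LEVEL:
            capability += human_rate   # slow, human-driven progress
        else:
            capability *= self_gain    # recursive self-improvement
        trajectory.append(capability)
    return trajectory


traj = simulate()
crossing = next(i for i, c in enumerate(traj) if c >= HUMAN_LEVEL)
print(f"threshold crossed in year {crossing}")
print(f"capability 5 years later: {traj[crossing + 5]:.0f}")
```

The point of the sketch is the asymmetry: it takes 25 slow years to reach the threshold, then only five fast ones to leave it far behind.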

The second argument is that getting the right algorithm can make a tremendous difference. Computer performance isn’t just dependent on improved hardware. It can, equally, be critically dependent upon finding the right algorithms. And sometimes the emergence of the right algorithm takes the world by surprise. Here, Jaan gave the example of the unforeseen announcement in 1993 by mathematician Andrew Wiles of a proof of the centuries-old Fermat’s Last Theorem. What Andrew Wiles did for the venerable problem of Fermat’s Last Theorem, another researcher might do for the even more venerable problem of superhuman AI.

The third argument is that AI researchers are already sitting on what can be called a huge “hardware overhang”.

As Jaan states:

It’s important to note that with every year the AI algorithm remains unsolved, the hardware marches to the beat of Moore’s Law – creating a massive hardware overhang. The first AI is likely to find itself running on a computer that’s several orders of magnitude faster than needed for human level intelligence. Not to mention that it will find an Internet worth of computers to take over and retool for its purpose.
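The scale of that overhang follows from simple compounding arithmetic. Taking the classic Moore’s Law figure of a doubling in price-performance roughly every two years (the delays below are illustrative assumptions), every extra decade the algorithm stays unsolved multiplies the compute waiting for the first AI about 32-fold:

```python
# Back-of-the-envelope hardware overhang: if price-performance doubles
# every `doubling_years`, the compute available to the first AI grows
# exponentially with every year the algorithm remains unsolved.

def overhang_factor(years_unsolved, doubling_years=2.0):
    """Multiple of today's compute available after `years_unsolved` years."""
    return 2 ** (years_unsolved / doubling_years)


for delay in (10, 20, 30):
    print(f"{delay} years unsolved -> ~{overhang_factor(delay):,.0f}x today's hardware")
# 10 years unsolved -> ~32x today's hardware
# 20 years unsolved -> ~1,024x today's hardware
# 30 years unsolved -> ~32,768x today's hardware
```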

Imagine the worst piece of malware created so far – exploiting a combination of security vulnerabilities, other software defects, and social engineering – and how quickly it can spread around the Internet. Now imagine that the author of that malware is 100 times smarter. Human users will find themselves almost unable to resist clicking on tempting links and unthinkingly providing passwords to screens that look identical to the ones they were half-expecting to see. Vast computing resources will quickly become available to the rapidly evolving, intensely self-improving algorithms. It will be the mother of all botnets, ruthlessly pursuing whatever are the (probably unforeseen) logical conclusions of the software that gave it birth.

OK, so the risk of hard take-off is very difficult to estimate. At the H+UK meeting, the panelists all expressed significant uncertainty about their predictions for the future. But that’s not a reason for inaction. If we thought the risk of super-AI hard take-off in the next 20 years was only 5%, that would still merit deep thought from us. (Would you get on an airplane if you were told the risk of it plummeting out of the sky was 5%?)
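The airplane comparison can be sharpened with one line of arithmetic (purely illustrative – the 5% figure is the hypothetical from above, not an estimate): a “small” per-exposure risk compounds brutally over repeated exposure, since the chance of surviving n independent exposures is (1 − risk)ⁿ.

```python
# A 5% risk per exposure compounds quickly. The chance of surviving
# n independent exposures is (1 - risk) ** n. Illustrative arithmetic only.

def survival_probability(risk_per_event, n_events):
    """Probability of never suffering the event across n_events exposures."""
    return (1 - risk_per_event) ** n_events


for n in (1, 10, 50):
    p = survival_probability(0.05, n)
    print(f"after {n} flights: {p:.1%} chance of never crashing")
# after 1 flights: 95.0% chance of never crashing
# after 10 flights: 59.9% chance of never crashing
# after 50 flights: 7.7% chance of never crashing
```

In other words, a 5% risk is not something you can afford to be exposed to repeatedly – which is the force of the thought experiment.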

I’ll end with another potential comparison, which I’ve written about before. It’s another example about underestimating the effects of breakthrough new technology.

On 1st March 1954, the US military performed its first test of a dry fuel hydrogen bomb, at the Bikini Atoll in the Marshall Islands. The explosive yield was expected to be from 4 to 6 megatons. But when the device was exploded, the yield was 15 megatons, two and a half times the expected maximum. As the Wikipedia article on this test explosion explains:

The cause of the high yield was a laboratory error made by designers of the device at Los Alamos National Laboratory. They considered only the lithium-6 isotope in the lithium deuteride secondary to be reactive; the lithium-7 isotope, accounting for 60% of the lithium content, was assumed to be inert…

Contrary to expectations, when the lithium-7 isotope is bombarded with high-energy neutrons, it absorbs a neutron then decomposes to form an alpha particle, another neutron, and a tritium nucleus. This means that much more tritium was produced than expected, and the extra tritium in fusion with deuterium (as well as the extra neutron from lithium-7 decomposition) produced many more neutrons than expected, causing far more fissioning of the uranium tamper, thus increasing yield.

This resultant extra fuel (both lithium-6 and lithium-7) contributed greatly to the fusion reactions and neutron production and in this manner greatly increased the device’s explosive output.

Sadly, this calculation error resulted in much more radioactive fallout than anticipated. Many of the crew in a nearby Japanese fishing boat, the Lucky Dragon No. 5, became ill in the wake of direct contact with the fallout. One of the crew subsequently died from the illness – the first human casualty from thermonuclear weapons.

Suppose the error in calculation had been significantly worse – perhaps by a factor of thousands rather than 2.5. This might seem unlikely, but when we deal with powerful unknowns, we cannot rule out powerful unforeseen consequences. For example, imagine if extreme human activity somehow interfered with the incompletely understood mechanisms governing supervolcanoes – such as the one that exploded around 73,000 years ago at Lake Toba (Sumatra, Indonesia) and which is thought to have reduced the worldwide human population at the time to perhaps as few as several thousand people.

The more quickly things change, the harder it is to foresee and monitor all the consequences. The more powerful our technology becomes, the more drastic the unintended consequences become. Merger or trainwreck? I believe the outcome is still wide open.

David Wood, formerly of Symbian, is a UK futurist and transhumanist. Jaan Tallinn is an Estonian programmer who co-founded Skype and Kazaa.

Comments


  2. time and space can only be experienced by the human heart.

    grace paraphrased

    when i was a young man in korea i went to the post exchange and saw a red led calculator that did simple math. i thought this was a fantastic device that would cost a lot back home in america. at the time i had a hard time forking over the twelve dollars for it but i did and when i got it back to the barracks and looked closely i saw i had paid a boat trip for the thing as it was made in my home town of austin by texas instruments.

    since then i have seen the changes and the changes in the perception of time. we are in a hard start if you consider the relation of the mind to time and that it is not consistent. while the young consider technology is crawling i see it evolving at a fantastic rate. it is not possible for me to keep up. i think it is evolving faster than anyone can keep up with. we are in a hard start. the only question is how long is a long time and how long is a short time.

    my great great grandmother was alive when i was a kid. she came to texas in a covered wagon and when she died there was space travel. she would say the hard start started in the civil war.

  3. Thank you for this article! It was an enjoyable read, and I think that sobering assessments of the current (and possible future) state of transhumanism are a lot more palatable than utopian/cult-ish prescriptions and predictions.

    I don’t buy that the advances in computing are making people smarter, however. People might become more efficient for certain tasks, but they also become more dependent upon those technological innovations. If I ask someone a simple question, and they pull out a smartphone to find the answer, the device doesn’t in fact make the person more adept at doing anything other than actually operating that device. I find the ability of people to recite epic poems from memory more impressive than whatever new functions/apps the next iteration of the iPhone brings, for example.

    • >People might become more efficient for certain tasks, but they also become more dependent upon those technological innovations.

      People might become more efficient in space exploration, but they also become more dependent upon rocket vehicles. Makes sense, huh?

  4. People were never in charge. You don’t choose to merge with a new technology. We only think we’re in charge in hindsight.

  5. I recommend you read the discussion of whether AI has been solved at:
    http://www.ai-forum.org/topic.asp?forum_id=1&topic_id=15698

  6. As you said, calculation errors can result in death.
    That’s precisely why we need to improve our calculation power.

    Global warming, WMD proliferation, asteroids, …
    Calculation errors in these domains will be deadly too.

    I recommend you read what the University of Oxford says about global catastrophic risks:
    http://www.fhi.ox.ac.uk/
