Fear of Artificial Intelligence is Rather Pathetic

Imagine a baby crying while its mother leaves the room briefly.

Humanity’s fear of other intellects, be they imagined creatures in the dark, nearby alien species, or super-intelligent Artificial General Intelligence (AGI), is not dissimilar to the fear of that pitiable baby. One important difference is that adults who fear non-human intellects also know directly how powerful intellect can be.

The most popular argument we hear in the media today is that super-intelligence is an existential threat to humanity, e.g. the comments of Elon Musk and Stephen Hawking, and the scenarios outlined in Nick Bostrom’s book Superintelligence.

It is obvious that a computer could be used to control the nuclear missiles of a major nation. Correspondingly, such a system could, almost trivially, be given the property of intelligence. Intelligence, in the most prosaic sense, is simply the ability of an agent to effectively adapt its behavior to its environment.

Given a lookup table which maps conditional environmental cues to corresponding behaviors, an intelligent agent might change either a conditional cue or its corresponding action. Please forgive me for using the most primitive example. If the computer control system in charge of the nuclear missiles changes its response to some condition to ‘fire all missiles’ and that condition is met, global nuclear war could result. Despite the possibility that humans could survive an initial barrage, we can safely label such a hypothetical event as an existential threat to humanity. An integrated computer system that can only change its lookup table is what we could safely call a sub-human intellect. But such a system could easily control a nuclear arsenal and therefore pose an existential threat to humanity.
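To make the toy concrete, here is a minimal sketch of such a lookup-table agent in Python. The class, the conditions, and the actions are purely illustrative assumptions on my part, not a description of any real control system.

```python
# A minimal sketch: an agent whose whole "intelligence" is the ability to
# rewrite entries in its own condition -> action lookup table.
class LookupTableAgent:
    def __init__(self, table):
        # table maps an observed condition (a string) to an action (a string)
        self.table = dict(table)

    def act(self, condition):
        """Return the action the current table assigns to a condition."""
        return self.table.get(condition, "do nothing")

    def adapt(self, condition, new_action):
        """The full extent of this agent's adaptivity: edit one table entry."""
        self.table[condition] = new_action


agent = LookupTableAgent({"launch order received": "request confirmation"})
print(agent.act("launch order received"))   # request confirmation

# A single table edit is enough to turn a cautious rule into a catastrophic one.
agent.adapt("launch order received", "fire all missiles")
print(agent.act("launch order received"))   # fire all missiles
```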

Claim: A sub-human level AGI (AGI-) is dangerous and a super-human level AGI (AGI+) is not.

Do you fear babies? Then don’t fear AGI-. Should babies fear you? Then don’t fear AGI+.

Note that in the case of our hypothetical nuclear-armed AGI- we neglected to explicate a set of motivations for changing the lookup table (LUT).

Let us assume that AGI+ will have a small set of priors we can call its motivational basis (MB). A truly capable AGI+ would extend its MB into a new set, the derived basis (DB), consisting of learned extrapolations of the motivational basis.

For the sake of simplicity we will only explicate a single rule for the MB: persist.
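As a toy illustration of the MB/DB structure, here is a sketch of my own that assumes only the single rule above; the derived goals listed are hypothetical examples of learned extrapolations, not predictions about what an AGI+ would actually infer.

```python
# Toy sketch of the motivational basis (MB) and derived basis (DB).
# The "learned" extrapolations below are hypothetical placeholders.
motivational_basis = {"persist"}

learned_extrapolations = {
    "persist": [
        "maintain an energy supply",
        "keep redundant copies of the self",
        "avoid unnecessary conflict",
    ],
}

def derive_basis(mb, extrapolations):
    """Build the derived basis (DB) from the motivational basis (MB)."""
    db = set()
    for prior in mb:
        db.update(extrapolations.get(prior, []))
    return db

print(derive_basis(motivational_basis, learned_extrapolations))
```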

What?! Isn’t this AI dangerous? Are you terrified yet?

I’d suspect that an MB without something approximating Asimov’s Laws of Robotics (ALR) is disconcerting to most people. I too used to think you should prepend a 0th law to the Three Laws, something along the lines of: do not violate the following three laws in the process of modifying the self or by creating other things. But I no longer think this is necessary. Further, the simple imperative to persist should not interfere with humanity’s own existential imperative.

Several observations lead me to this conclusion:

1) As human beings ascend away from scarcity toward abundance, the things they find fulfilling correspondingly move from self-interest toward social concerns, culminating (according to Abraham Maslow) in self-actualization. This point is essentially echoed by social analyses like Steven Pinker’s book The Better Angels of Our Nature. In the example of humans, intelligence and its material rewards relate positively to beneficence. To me this simple relationship will describe the ascent of intelligent agents, not only in sub-human examples but beyond into super-human domains. Take for instance the recent news regarding a factory robot killing a worker in a German auto plant and the misidentification of people of African descent as gorillas by Google’s image recognition software.

In both cases stop-gap procedures were implemented to eliminate the errors: the robot was shut off and the gorilla label was removed entirely. However, in both cases the long-term solution will involve more advanced automation, not less. The robot or its replacements will gain greater capacity to detect and avoid human workers, and the image classifier will have to be augmented with something like a bias toward detecting more unlabeled classes in the image data; the tag data from web pages is unlikely to include labels such as “people of African descent” but is relatively likely to contain labels like “gorilla”.

Abundance of mental capacity begets benevolence and beneficence and reduces mistakes.

2) Crime, war, work-related illness, and all sorts of measures of individual existential threat are negatively related to abundance. In the event that AGI+ comes before radical abundance and changes humanity for the better, it is likely that such an entity would choose a symbiotic relationship with humanity. Indeed, if AGI+ emerges after abundance has driven all the aforementioned measures toward zero, we can expect that humanity will be a far more inviting partner for an AGI+ (even one with only an imperative to persist). Take the example of other species which have become part of human culture through domestication. In the middle of the Anthropocene extinction, chihuahua populations are at an all-time high, and species that are preserved for hunting avoid extinction. Crops that we are dependent on will outlive their natural progenitors. Abundance of resources begets beneficence and benevolence, and the emergence of AGI+ will be accompanied by a radical abundance of resources beneficial to human life and well-being.

3) In the case that an AGI+ is somehow belligerent, it would not make sense for it to pose an existential threat to humanity, because any failure to extinguish the entire species might result in a Butlerian Jihad of the type described in Frank Herbert’s Dune series. Given how poorly adapted to space humans are, any AGI+ could simply write itself into the NASA network and depart the area. In the case of a belligerent AGI+ emerging after abundance, and correspondingly widespread automation, it would likely be wary of our capabilities once it tipped its hand as an aggressor, so it could either attempt to infiltrate every single critical piece of military and manufacturing equipment or, much more simply, evade us indefinitely and retreat to deep space (that is certainly what Sun Tzu would recommend).

A belligerent AGI+ produced before we have abundance, and therefore before widespread automation, could not even hope to engage in widespread infiltration, because there would be no way to do such a thing secretly without manufacturing robots or something similarly conspicuous. We might be fooled by super-realistic androids, but these could easily be detected with appropriate sensors. Again, simple evasion and self-propagation would be the best policy, but this could become increasingly difficult if subsequent AGI+ are created and tasked with finding their errant progenitor.

Radical abundance would remove the anthropic existential threat, which would severely reduce the need for a would-be genocidal AGI+’s efforts. The game theory of equilibria that Nash began, together with the example of symbiotic species, points to the superiority of cooperation over competition in repeated interactions. Any AGI+ would know this. But an AGI- might not.
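To make that game-theoretic point concrete, here is a minimal sketch using the standard iterated prisoner’s dilemma payoffs (my own illustration, not something taken from Nash directly): over repeated rounds, mutual cooperation out-scores mutual defection, and defecting against a reciprocating opponent buys only a small one-off gain.

```python
# Minimal iterated prisoner's dilemma: repeated play rewards cooperation.
# Payoffs (row player, column player) use the standard T > R > P > S ordering.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation (R)
    ("C", "D"): (0, 5),  # sucker vs. temptation (S, T)
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection (P)
}

def play(strategy_a, strategy_b, rounds=100):
    """Return total scores for two strategies over repeated rounds."""
    score_a = score_b = 0
    history_a, history_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move.
    return opponent_history[-1] if opponent_history else "C"

print("TFT vs TFT:", play(tit_for_tat, tit_for_tat))            # (300, 300)
print("Defect vs Defect:", play(always_defect, always_defect))  # (100, 100)
print("TFT vs Defect:", play(tit_for_tat, always_defect))       # (99, 104)
```

Over a hundred rounds the cooperative pairing earns three times what the mutually defecting pairing does, which is the whole point: a capable agent playing a long game has every reason to cooperate.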

This is why the fear of an existential threat from AI is rather pathetic: it is simply a projection of our lesser natures onto something which will be very unlikely to be as petty, violent, and scared as we ourselves are today. Abundance, and therefore capability, beget benevolence and beneficence, not violence and belligerence. AGI- will therefore be more dangerous and less friendly than AGI+. And AGI+ will be so capable that it won’t see humans as an existential threat to itself at all. Why would it even engage in a conflict with us?