The recent U.N. call for a moratorium on the development of autonomous weapon systems has raised some interesting issues and debate. While the announcement by U.N. Special Rapporteur Christof Heyns rather dryly refers to LARs, or lethal autonomous robots, the story was soon picked up by numerous news sites with headlines pitting killer robots against the U.N.
The subject of intelligent machines and what they mean for humanity is open to much interpretation, but the majority opinion often seems to suggest a rather dystopian future. From serious academic works – War in the Age of Intelligent Machines – to widespread blockbuster entertainment – Terminator Salvation – the picture painted is not too rosy. While some commentators suggest machines will be our wise, helpful companions, using their superior brain power simply to ease our lives, this is not the default position.
Once machines can think and fend for themselves, so the argument goes, there will be no more need for humanity. Initially, while these new competitors still rely on us in some capacity, we will be side-lined and out-smarted, but once they are truly self-sufficient our resources will be strangled and we will be pitched into a battle against foes we cannot hope to defeat.
This certainly makes for great drama, but how likely is a scenario where mankind is fighting for survival against intelligent machines? Would self-sufficient machines feel the need to fight us outright, and if they did, would some form of genetic or biochemical agent not achieve the aim much quicker and more effectively?
Perhaps instead the machines will opt for a softer form of control. Some articles – Will Super Smart Artificial Intelligences Keep Humans Around As Pets? – have suggested we will be akin to pampered pets – creatures to be cared for and fed, unable to understand the full complexity of what goes on around them but neutered and controllable, not a threat in any way.
These scenarios are both possible but rather polarized in their ideas of the relationship that mankind may have with a superior intelligence. There are over 7 billion humans on the planet and rising, so is it likely that we can talk about a one-size-fits-all approach?
If we want to consider how a superior intelligence might treat creatures with lesser intelligence then it is certainly instructive to look at our own behaviour. In fact, it is undoubtedly from this consideration that the two scenarios above have developed. Mankind over the past five hundred years has pampered and bred countless animals while at the same time massacring and confining countless more.
The idea that it is an either/or situation – Terminator or Brave New World – is patently ridiculous when we consider our own behaviour. We can quite happily dote on a pet and also buy processed meat. We can put a bird feeder in our garden but kill a spider that crawls into the bath.
Of course, even this view is too simplistic; there are countless animals that are neither pets, prey, nor food. Foxes live alongside us in our cities largely untouched by humans, birds share the same parks as we do, and large numbers of rats live in our sewers. The human race's attitude to animals is simply too complex to capture in one analogy.
Likewise, we might expect that any relationship between ourselves and a higher intelligence in the form of a machine would be at least as complex, if not more so.
The parameters that will decide whether humans will battle with machines or be pampered by them are undefined at present, but it seems sensible to imagine that there will be a spectrum of different responses, ranging from extreme antagonism through indifference to extreme attachment. We interact with animals for a huge range of different reasons – why should a machine be any different?
The key issue is what form of intelligence any machines we develop will have. As with any organism, the character of an intelligence is shaped by its environment. Similarly, the 'environment' that we create machines in will determine what intelligence we perceive those machines to have.
It doesn’t take much to realise that if we invest in designing machines to kill then the intelligence they develop will model the world around the idea of destroying certain targets. If we spend similar time developing machines to clean up pollution, cultivate food or write operas then those machines will grow to have different skills and likewise a different form of intelligence.
Too often, discussion of intelligent machines, or of intelligence in humans for that matter, is clouded by the idea that intelligence is some form of empirical possession: people can have x amount of intelligence, or a machine will someday reach a value y that is greater than x. This idea is severely misleading and does not bear close comparison with our own experience or use of the word intelligence.
In a sense, every generation of humans is another wave of intelligent machines that starts from scratch, and the values we instil in them can clearly be seen to be a mix of good and bad – why should we expect any different from AI?
The fact that intelligent machines in the future may create even greater disparity between those humans who are pampered and those who are hunted should not be any great surprise – after all, a world of inequality is one we have already created on our own, without the need for any machines.
Lochlan Bloom lives in London and does not have a cat or a dog. He is a writer of fiction and non-fiction and has completed recent projects for BBC Radio Scotland, H+ Magazine, Ironbox Films and Calliope, the official publication of the Writers' Special Interest Group (SIG) of American Mensa, amongst others.
The BBC Writersroom describe his writing as ‘unsettling and compelling… vivid, taut and grimly effective work’. He currently has a feature length script in production with Porcelain Film. His novella, Trade, is out now.
For more details visit www.lochlanbloom.com.