At the heart of filmmaker Barrat’s book is the prophecy of the British mathematician I.J. Good, a colleague of Alan Turing. Good reasoned that once machines become more intelligent than humans, they will design still more intelligent machines, leading to an intelligence explosion that leaves humans far behind. Is this true? What Barrat finds is that almost half of the experts in the field expect intelligent machines within 15 years, and a large majority expect them shortly thereafter. Barrat concludes that this intelligence explosion will lead almost immediately to the singularity, although we have no idea what these machines will do.
Eclipse of Man: Human Extinction and the Meaning of Progress, by Charles T. Rubin:
The political philosopher Rubin explores the roots of our desire to use technology to alter the human condition. This urge has served humans well in the past, but Rubin argues that technologically minded idealists now regard humanity itself as a problem to be solved. That, he believes, is a mistake, and handing our decisions over to machines is dangerous. Instead of improving us, our technology might supplant us; it would be as if a hostile alien had invaded.
Smarter Than Us: The Rise of Machine Intelligence, by Stuart Armstrong:
Armstrong is a fellow of the Future of Humanity Institute at Oxford who has thought hard about how a superintelligence could be made to be “friendly.” He argues that it would be difficult to communicate with alien beings that have computer minds: we might ask a superintelligence to rid the planet of violence, and it might rid the planet of us! The point is that values are hard to specify, since they rest on, among other things, common sense and unstated assumptions. Turning those values into programming code would be extraordinarily challenging, and to avoid catastrophe we could not afford to make mistakes.
In Our Own Image: Will Artificial Intelligence Save or Destroy Us?, by George Zarkadakis:
Most of our ideas about what it would be like to live with super-intelligences come from science fiction, says the AI researcher George Zarkadakis. There can be little doubt that science fiction stories and metaphors have shaped our thinking; as a result, we tend to anthropomorphize our technology in order to make sense of it. We imagine robots like Schwarzenegger’s Terminator; we imagine robots and super-intelligences with human qualities. But intelligent machines won’t be human: they will not share our evolutionary history, and they will not have brains like ours. So who knows what their goals and values will be, or how they will regard humans? Perhaps they will have no need for us.
All these books worry that intelligent machines might destroy us, even if only inadvertently. Moreover, many AI researchers aren’t concerned about the problem of creating friendly AIs. In fact, a large part of AI research is and has historically been dedicated to developing robots for war—clearly unfriendly AIs.
Surely things can go wrong if we create autonomous machines designed to kill humans. All of these authors believe we should be worried; read their books and decide for yourself.
John G. Messerly, Ph.D., taught for many years in both the philosophy and computer science departments at the University of Texas at Austin. His most recent book is The Meaning of Life: Religious, Philosophical, Scientific, and Transhumanist Perspectives. He blogs daily on issues of futurism and the meaning of life at reasonandmeaning.com.