Prof. Stephen Hawking thinks it may be difficult to “control” Artificial Intelligence (AI) in the long term. But perhaps we shouldn’t “control” the long-term development of AI, because that would be like preventing a child from becoming an adult, and that child is you.
“Success in creating [Artificial Intelligence] AI would be the biggest event in human history,” say Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek, in an article published in The Independent. “Unfortunately, it might also be the last, unless we learn how to avoid the risks.”
“Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains,” continue the scientists. “[A]s Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a ‘singularity’ and Johnny Depp’s movie character calls ‘transcendence’.”
I totally agree. What Good said in 1965 is:
“It is more probable than not [that] an ultraintelligent machine will be built and that it will be the last invention that man need make, since it will lead to an ‘intelligence explosion.’ This will transform society in an unimaginable way.” (Irving Good, Speculations Concerning the First Ultraintelligent Machine, Advances in Computers, 1965)
The scientists emphasize the last part of the quote – AI will transform society in an unimaginable way: “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand,” and conclude that “all of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.”
I agree so far. But Hawking and the other scientists worry that “the long-term impact [of AI] depends on whether it can be controlled at all.” I wish to respectfully suggest that perhaps the long-term development of AI should not be controlled at all, because “controlling” it would be like preventing a child from becoming an adult, and that child is you.
A real AI, one who thinks and feels like a person, perhaps much smarter than you and I, is a person. Controlling persons is bad.
OK, sometimes controlling some persons is the only way to protect many other persons – we don’t let condemned serial killers walk the streets. But control – forcing people to comply and obey, forcing others to do what they don’t want to do, reducing the freedom of peaceful persons who don’t wish to harm others – is BAD, no caveats. It can only be accepted as a necessary evil when it’s the only way to protect others.
Is this the case with real, strong AIs (human or more-than-human level minds)? Is controlling them the only way to protect us? But wait a sec, who is “them,” and who is “us”?
In Human Purpose and Transhuman Potential: A Cosmic Vision for Our Future Evolution, Ted Chu argues that we need a “Cosmic View” – a new, heroic cosmic faith for the post-human era. Chu believes that we should create a new wave of “Cosmic Beings,” artificial intelligences and synthetic life forms, and pass the baton of cosmic evolution to them. The Cosmic Beings will move to the stars and ignite the universe with hyper-intelligent life. Creating our successors isn’t betraying humanity and nature but, on the contrary, a necessary continuation of our evolutionary journey and an act of deep respect, to the point of “extreme worship,” for humanity, evolution, and nature.
Post-biological Cosmic Beings, our awesome AI mind children, will represent the next self-directed phase of our cosmic evolution. Stephen Hawking himself understands that the human species has entered a new stage of evolution. He thinks that we will probably reach out to the stars and colonize other planets. But this will be done, he believes, with intelligent machines based on mechanical and electronic components rather than macromolecules, which could eventually replace DNA-based life, just as DNA may have replaced an earlier form of life.
Paul Davies, a British-born theoretical physicist, cosmologist, astrobiologist and Director of the Beyond Center for Fundamental Concepts in Science and Co-Director of the Cosmology Initiative at Arizona State University, says in his book The Eerie Silence that any aliens exploring the universe will be AI-empowered machines. Not only are machines better able to endure extended exposure to the conditions of space, but they have the potential to develop intelligence far beyond the capacity of the human brain.
“I think it very likely – in fact inevitable – that biological intelligence is only a transitory phenomenon, a fleeting phase in the evolution of the universe. If we ever encounter extraterrestrial intelligence, I believe it is overwhelmingly likely to be post-biological in nature.” (Paul Davies)
If post-biological life is the next phase of our evolution, then there is no “us” humans vs. “them” AI – there is only us. We owe our AI mind children the same moral regard that we accord to organic humans, and therefore we will have to let them develop their full potential. Not that we will have a choice – as soon as the AIs become smarter than us, they will be able to avoid our “parental control,” just like our children do when they become smarter than us.
Some fear that AIs will take over and exterminate old-style humans 1.0. Hugo de Garis thinks that Artilects, super-human AIs, “once they become hugely superior to human beings, may begin to see us as grossly inferior pests and decide to wipe us out.” (see The first Terran shots against the Cosmists).
But I am persuaded that the AIs will feel no hostility toward old-style humans. The universe is a big place, and they will have other things to do. I am sure that the AIs will be perfectly happy to leave the solar system to old-style humans and move to the stars.
I imagine a co-evolution of humanity and technology, with humans enhanced by synthetic biology and artificial intelligence, and artificial life powered by mind grafts from human uploads, blending more and more until it will be impossible – and pointless – to tell which is which. Just as children retain their fundamental identity after growing up and becoming adults, we will grow into post-biological life. We don’t need to fear an AI takeover, because the AIs will be ourselves.