Is Artificial Superintelligence Research Ethical?

Recently I interviewed Roman Yampolskiy, a Latvian-born computer scientist at the University of Louisville known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety. He holds a PhD from the University at Buffalo (2008) and is currently the director of the Cyber Security Laboratory in the Department of Computer Engineering and Computer Science at the Speed School of Engineering.

Yampolskiy is the author of some 100 publications, including numerous books. His work is frequently profiled in popular media, including the BBC, MSNBC, Yahoo, New Scientist, the Alex Jones Radio Show, and many others.


Many philosophers, futurologists, and artificial intelligence researchers have conjectured that in the next 20 to 200 years a machine capable of at least human-level performance on all tasks will be developed. Since such a machine would, among other things, be capable of designing the next generation of even smarter intelligent machines, it is generally assumed that an intelligence explosion will take place shortly after such a technological self-improvement cycle begins. While specific predictions regarding the consequences of such an intelligence singularity vary from potential economic hardship to the complete extinction of humankind, many of the researchers involved agree that the issue is of utmost importance and needs to be seriously addressed. Investigators concerned with the existential risks that the appearance of superintelligence poses to humankind often describe what can be called the Singularity Paradox as their main reason for thinking that humanity might be in danger. Briefly, the Singularity Paradox can be stated as: “Superintelligent machines are feared to be too dumb to possess common sense.”

The Singularity Paradox is easy to understand via some examples. Suppose that scientists succeed in creating a superintelligent machine and order it to “make all people happy”. Complete happiness for humankind is certainly a noble and worthwhile goal, but perhaps we are not considering some unintended consequences of giving such an order. Any human immediately understands what is meant by this request; a non-exhaustive list may include making all people healthy, wealthy, beautiful, and talented, and giving them loving relationships and novel entertainment. However, a superintelligent machine could derive many alternative ways of “making all people happy”. For example:

A daily cocktail of cocaine, methamphetamine, methylphenidate, nicotine, and 3,4-methylenedioxymethamphetamine, better known as Ecstasy, may do the trick.

Forced lobotomies for every man, woman and child might also accomplish the same goal.

A simple observation that happy people tend to smile may lead to forced plastic surgeries to affix permanent smiles to all human faces.

An infinite number of other approaches to accomplishing universal human happiness could be derived. For a superintelligence the question is simply which one is fastest/cheapest (in terms of computational resources) to implement. Such a machine clearly lacks common sense, hence the paradox. So is future artificial intelligence dangerous to humankind?

Roman Yampolskiy: Certain types of research, such as human cloning, certain medical or psychological experiments on humans, animal (great ape) research, etc., are considered unethical because of their potential detrimental impact on the test subjects, and so are either banned or restricted by law. Additionally, moratoriums exist on the development of dangerous technologies such as chemical, biological, and nuclear weapons because of the devastating effects such technologies may exert on humankind. Similarly, I argue that certain types of artificial intelligence research fall under the category of dangerous technologies and should be restricted. Classical AI research, in which a computer is taught to automate human behavior in a particular domain such as mail sorting or spellchecking documents, is certainly ethical and does not present an existential risk to humanity. On the other hand, I argue that Artificial General Intelligence (AGI) research should be considered unethical. This follows logically from a number of observations. First, true AGIs will be capable of universal problem solving and recursive self-improvement. Consequently, they have the potential to outcompete humans in any domain, essentially making humankind unnecessary and thus subject to extinction. Additionally, a true AGI system may possess a type of consciousness comparable to the human type, making robot suffering a real possibility and any experiments with AGI unethical for that reason as well.

If AGIs are allowed to develop, there will be direct competition between superintelligent machines and people. Eventually the machines will come to dominate because of their self-improvement capabilities. Alternatively, people may decide to give power to the machines, since the machines are more capable and less likely to make an error. A similar argument was presented by Ted Kaczynski, aka the Unabomber, in his famous manifesto: “It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.”

To address this problem, the last decade has seen a boom in new subfields of computer science concerned with the development of ethics in machines. Machine ethics, computer ethics, robot ethics, ethicALife, machine morals, cyborg ethics, computational ethics, roboethics, robot rights, and artificial morals are just some of the proposals meant to address society’s concerns about the safety of ever more advanced machines. Unfortunately, the perceived abundance of research in intelligent machine safety is misleading. The great majority of published papers are purely philosophical in nature and do little more than reiterate the need for machine ethics and argue about which set of moral convictions would be the right one to implement in our artificial progeny: Kantian, utilitarian, Jewish, etc. However, since ethical norms are not universal, a “correct” ethical code could never be selected over others to the satisfaction of humanity as a whole.

Consequently, because of the serious and unmitigated dangers of AGI, I propose that AI research review boards be set up, similar to those employed in the review of medical research proposals. A team of experts in artificial intelligence should evaluate each research proposal and decide whether it falls under the standard, limited-domain type of AI or may potentially lead to the development of a full-blown AGI. Research potentially leading to uncontrolled artificial general intelligence should be restricted from receiving funding or be subject to complete or partial bans. An exception may be made for the development of safety measures and control mechanisms specifically aimed at AGI architectures.

With the survival of humanity on the line, the issues raised by the Singularity Paradox are too important for us to put “all our eggs in one basket”. We should not limit our response to any one technique, or to an idea from any one scientist or group of scientists. A large research effort from the scientific community is needed to solve this issue of global importance. Even if there is a relatively small chance that a particular method would succeed in preventing an existential catastrophe, it should be explored, as long as it is not likely to create significant additional dangers to the human race. After analyzing dozens of solutions from as many scientists, I came to the conclusion that the search is just beginning. I am currently writing a book (Artificial Superintelligence: A Futuristic Approach) devoted to summarizing my findings about the state of the art in this new field of inquiry, and I hope that it will invigorate research into AGI safety.

In conclusion, we would do best to assume that AGI may present serious risks to humanity’s very existence and to proceed, or not proceed, accordingly. Humanity should not put its future in the hands of the machines, since it will not be able to take the power back. In general, a machine should never be in a position to terminate human life or to make any other non-trivial ethical or moral judgment concerning people. A world run by machines will lead to unpredictable consequences for human culture, lifestyle, and the overall probability of survival for humankind. The question raised by Bill Joy, “Will the future need us?”, is as important today as ever. “Whether we are to succeed or fail, to survive or fall victim to these technologies, is not yet decided.”

Dr. Roman Yampolskiy is an assistant professor in the Department of Computer Engineering and Computer Science at the University of Louisville. His recent research focuses on the technological singularity. A graduate of Singularity University, Dr. Yampolskiy was also a visiting fellow of the Singularity Institute and had his work published in the first academic book devoted to the study of the Singularity, “Singularity Hypothesis”, and in the first special issue of an academic journal devoted to that topic (Journal of Consciousness Studies).


Other papers can be found here: http://cecs.louisville.edu/ry/publications.htm

http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf

http://cecs.louisville.edu/ry/AIsafety.pdf

http://cecs.louisville.edu/ry/Artimetrics.pdf


For more videos of lectures and interviews with thought leaders, please subscribe to Adam Ford’s YouTube Channel.

