Wendell Wallach on Machine Morality

Wendell Wallach, a lecturer and consultant at Yale University’s Interdisciplinary Center for Bioethics, has emerged as one of the leading voices on technology and ethics. His 2009 book Moral Machines (co-authored with Colin Allen) provides a solid conceptual framework for understanding the ethical issues related to artificial intelligences and robots, and reviews key perspectives on these issues, always with an attitude of constructive criticism. He also designed the first university course anywhere focused on Machine Ethics, which he has taught several times at Yale.

A few years ago, Wendell invited me to speak in Yale’s technology & ethics seminar series (see the slides from the talk here) – which was a rewarding experience for me, due both to the interesting questions from the audience, and also to the face-to-face dialogues on AI, Singularity, ethics, and consciousness that we shared afterwards. Some of the key points from our discussions are raised in the following interview that I did with Wendell for H+ Magazine:

Ben:

Ray Kurzweil has predicted a technological Singularity around 2045. Max More has asserted that what’s more likely is a progressive, ongoing Surge of improving technologies, without any brief interval of incredibly sudden increase. Which view do you think is more accurate, and why? And do you think the difference really matters (and if so, why)?

Wendell:

I’ve characterized my perspective as that of a “friendly skeptic” – friendly to the can-do engineering spirit that animates the development of AI, skeptical that we understand enough about intelligence to create human-like intelligence in the next few decades. We are certainly in the midst of a surge in technological development, and will witness machines that increasingly outstrip human performance in a number of dimensions of intelligence. However, many of our present hypotheses will turn out to be wrong, or the implementation of our better theories will prove extremely difficult. Furthermore, I am not a technological determinist. There are societal, legal, and ethical challenges that will arise to thwart easy progress in developing technologies that are perceived to threaten human rights, the semblance of human equality, and the centrality of humanity in determining its own destiny. Periodic crises in which technology is complicit will moderate the more optimistic belief that there is a technological fix for every challenge.

Ben:

You’ve written a lot about the relation between morality and AI, including a whole book, “Moral Machines.” To start to probe into that topic, I’ll first ask you: How would you define morality, broadly speaking? What about ethics?

Wendell:

Morality is the sensitivity to the needs and concerns of others, and the willingness to often place more importance on those concerns than upon self-interest. Ethics and morality are often used interchangeably, but ethics can also refer to theories about how one determines what is right, good, or just. My vision of a moral machine is a computational system that is sensitive to the moral considerations that impinge upon a challenge, and which factors those considerations into its choices and actions.

Ben:

You’ve written and spoken about the difference between bottom-up and top-down approaches to ethics. Could you briefly elaborate on this, and explain what it means in the context of AI software, including near-term narrow-AI software as well as possible future AI software with high degrees of general intelligence. What do these concepts tell us about the best ways to make moral machines?

Wendell:

The inability of engineers to accurately predict how increasingly autonomous (ro)bots (embodied robots and computer bots within networks) will act when confronted with new challenges and new inputs is necessitating the development of computers that make explicit moral decisions in order to minimize harmful consequences. Initially these computers will evaluate options within very limited contexts.

Top-down refers to an approach where a moral theory for evaluating options is implemented in the system. For example, the Ten Commandments, utilitarianism, or even Asimov’s laws for robots might be implemented as principles used by the (ro)bot to evaluate which course of action is most acceptable. A strength of top-down approaches is that ethical goals are defined broadly to cover countless situations. However, if the goals are defined too broadly or abstractly their application to specific cases is debatable. Also static definitions lead to situational inflexibility.
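
As a rough illustration only (not drawn from Wallach’s own work), a top-down evaluator can be pictured as a fixed table of weighted principles applied to every candidate action; the rule names, weights, and scoring scheme below are invented for the sketch.

```python
# Hypothetical sketch of a top-down moral evaluator: a fixed set of weighted
# principles is applied to every candidate action, and the action incurring
# the smallest total penalty is chosen. Rules, weights, and the "violates"
# annotation are invented for illustration only.

RULES = {
    "do_not_harm_humans": 10.0,   # heavily weighted constraint
    "obey_instructions": 3.0,
    "preserve_self": 1.0,
}

def violations(action):
    """Names of the rules this action violates (supplied upstream;
    in a real system, making this judgment is itself the hard part)."""
    return set(action.get("violates", []))

def evaluate_top_down(candidate_actions):
    def penalty(action):
        return sum(RULES[r] for r in violations(action) if r in RULES)
    return min(candidate_actions, key=penalty)

actions = [
    {"name": "swerve", "violates": ["preserve_self"]},
    {"name": "brake_hard", "violates": []},
    {"name": "continue", "violates": ["do_not_harm_humans"]},
]
print(evaluate_top_down(actions)["name"])  # -> brake_hard
```

The brittleness Wallach notes shows up immediately: the rules and weights are frozen at design time, and deciding whether a given action actually “violates” a principle is left to some other part of the system.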

Bottom-up approaches are inspired by moral development and learning as well as evolutionary psychology. The basic idea is that a system might either evolve moral acumen or go through an educational process where it learns to reason about moral considerations. The strength of bottom-up AI approaches is their ability to dynamically integrate input from many discrete subsystems. The weakness lies in the difficulty in defining a goal for the system. If there are many discrete components in the system, it is also a challenge to get them to function together.
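
A bottom-up system, by contrast, acquires its evaluations from experience. The toy sketch below is again hypothetical: the features and the simple feedback rule stand in for the “educational process” Wallach describes, nudging the weights of a linear scorer from approval and disapproval signals.

```python
# Hypothetical sketch of a bottom-up moral learner: a linear scorer over
# simple features of an action, adjusted by approval/disapproval feedback
# (a stand-in for moral education). Features and update rule are invented.

FEATURES = ["causes_harm", "breaks_promise", "helps_other"]

class BottomUpLearner:
    def __init__(self, learning_rate=0.1):
        self.weights = {f: 0.0 for f in FEATURES}
        self.lr = learning_rate

    def score(self, action):
        return sum(self.weights[f] * action.get(f, 0.0) for f in FEATURES)

    def feedback(self, action, approved):
        # Push the score up for approved actions, down for disapproved ones.
        direction = 1.0 if approved else -1.0
        for f in FEATURES:
            self.weights[f] += self.lr * direction * action.get(f, 0.0)

learner = BottomUpLearner()
# "Education": disapprove harmful actions, approve helpful ones.
for _ in range(50):
    learner.feedback({"causes_harm": 1.0}, approved=False)
    learner.feedback({"helps_other": 1.0}, approved=True)

print(learner.score({"causes_harm": 1.0}))   # negative after training
print(learner.score({"helps_other": 1.0}))   # positive after training
```

Its weakness matches Wallach’s description: nothing inside the learner says what the feedback is ultimately aiming at, and with many interacting subsystems the coordination problem only grows.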

Eventually, we will need artificial moral agents which maintain the dynamic and flexible morality of bottom-up systems that accommodate diverse inputs, while also subjecting the choices and actions to the evaluation of top-down principles that represent ideals we strive to meet. In addition to the ability to reason about moral challenges, moral machines may also require emotions, social skills, a theory of mind, consciousness, empathy, and be embodied in the world with other agents. These supra-rational capabilities will facilitate responding appropriately to challenges within certain domains. Future AI systems that integrate top-down and bottom-up approaches together with supra-rational capabilities will only be possible if we perfect strategies for artificial general intelligence.
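
One way to picture the hybrid Wallach describes is a learned, bottom-up scorer whose preferred choices are vetted against top-down principles before they are acted on. The sketch below is purely illustrative; the constraint set, features, and weights are assumptions, not anyone’s published architecture.

```python
# Hypothetical hybrid sketch: a learned (bottom-up) scorer proposes the best
# action, but only among actions that pass fixed top-down constraints.
# All names, features, and weights here are invented for illustration.

HARD_CONSTRAINTS = {"do_not_harm_humans"}

# Stand-in for a scorer acquired through learning (see the bottom-up sketch).
LEARNED_WEIGHTS = {"helps_other": 1.2, "breaks_promise": -0.8}

def learned_score(action):
    return sum(w * action.get(f, 0.0) for f, w in LEARNED_WEIGHTS.items())

def choose(candidates):
    permitted = [a for a in candidates
                 if HARD_CONSTRAINTS.isdisjoint(a.get("violates", []))]
    if not permitted:
        return None  # no acceptable option; defer to a human overseer
    return max(permitted, key=learned_score)

actions = [
    {"name": "lie_to_help", "helps_other": 1.0, "breaks_promise": 1.0},
    {"name": "help_honestly", "helps_other": 1.0},
    {"name": "exploit", "helps_other": 2.0, "violates": ["do_not_harm_humans"]},
]
print(choose(actions)["name"])  # -> help_honestly
```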

Ben:

At what point do you think we have a moral responsibility to the AIs we create? How can we tell when an AI has the properties that mean we have a moral imperative to treat it like a conscious feeling agent rather than a tool?

Wendell:

An agent must have sentience and feel pleasure and pain for us to have obligations to it. Given society’s lack of concern for great apes and other creatures with consciousness and emotions, the bar for being morally responsible to (ro)bots is likely to be set very high.

How one determines that a machine truly has consciousness or somatic emotions will be the tricky part. We are becoming more knowledgeable about the cognitive sensibilities of non-human animals. Future scientists should be able to develop a Turing-like test for sentience for (ro)bots. We should be sensitive about falling into a “slave” mentality in relationship to future intelligent systems. While claims of feeling pain, or demands for freedom, can easily be programmed into a system, it will be important to take those demands and claims seriously if they are accompanied by the capabilities and sensitivities that one might expect from a sentient being.

Ben:

Humans are not really all that moral, when you come down to it. We often can be downright cruel to each other. So I wonder if the best way to make a highly moral AGI system might be not to emulate human intelligence too closely, but rather to make a system with morality more thoroughly at the core. Or do you think this is a flawed notion, and human morality is simply part and parcel of being human — so that to really understand and act in accordance with human morality, an AI would have to essentially be a human or something extremely similar?

Wendell:

While human behavior can be less than admirable, humans do have the capability to be sensitive to a wide array of moral considerations in responding to complicated situations. Our evolutionary and cultural ancestry has bequeathed us an adaptive toolbox of propensities that compensate for each other. These compensating propensities may fail us at times. But I personally lament the pathologizing of human nature and the aggrandizing of what future (ro)bots might achieve. Our flaws and our strengths are one and the same. We humans are indeed remarkable creatures.

That said, a (ro)bot does not need to emulate a human to be a moral machine. But it does need to demonstrate sensitivity to moral considerations if it is going to interact satisfactorily with humans in social contexts or make decisions that affect humanity’s well-being. Humans don’t have sonar, but sonar together with light sensors may lead to excellent navigation skills. The speed of computer processing could contribute to moral reasoning abilities that exceed the bounded morality of human decision making.

The test is whether the moral sensitivity of the (ro)bot is satisfactory. (Ro)bots will need to pass some form of a Moral Turing Test, and demonstrate a willingness to compromise, or even suspend, their goals for the good of others.

One question is whether a (ro)bot without somatic emotions, without conscious awareness, or without being embodied in the world with other agents can actually demonstrate such sensitivity. There may be reasons, which are not fully understood today, why we humans have evolved into the kinds of creatures we are. To function successfully in social contexts (ro)bots may need to emulate many more human-like attributes than we presently appreciate.

Ben:

What are your thoughts about consciousness? What is it? Let’s say we build an intelligent computer program that is as smart as a human, or smarter. Would it necessarily be conscious? Could it possibly be conscious? Would its degree and/or type of consciousness depend on its internal structures and dynamics, as well as its behaviors?

Wendell:

There is still a touch of the mystic in my take on consciousness. I have been meditating for 43 years, and I perceive consciousness as having attributes that are ignored in some of the existing theories for building conscious machines. While I dismiss supernatural theories of consciousness and applaud the development of a science of consciousness, that science is still rather young. The human mind/body is more entangled in our world than models of the self-contained machine would suggest. Consciousness is an expression of relationship. In the attempt to capture some of that relational dynamic, philosophers have created concepts such as embodied cognition, intersubjectivity, and enkinaesthesia. There may even be aspects of consciousness that are peculiar to being carbon-based organic creatures.

We already have computers that are smarter than humans in some respects (e.g., mathematics and data-mining), but are certainly not conscious. Future (ro)bots that are smarter than humans may demonstrate functional abilities associated with consciousness. After all, even an amoeba is aware of its environment in a minimal way. But other higher-order capabilities such as being self-aware, feeling empathy, or experiencing transcendent states of mind depend upon being more fully conscious.

I suspect that without somatic emotions or without conscious awareness (ro)bots will fail to interact satisfactorily with humans in complex situations. In other words, without emotional and moral intelligence they will be dumber in some respects. However, if certain abilities can be said to require consciousness, then having the abilities is a demonstration that the agent has a form of consciousness. The degree and/or type of consciousness would depend on its internal structure and dynamics, not merely upon the (ro)bot’s demonstrating behavior equivalent to that of a human.

These reservations might leave some readers of this interview with the impression that I am dismissive of research on machine consciousness. But that is not true. I have been working together with Stan Franklin on his LIDA model for artificial general intelligence. LIDA is a computational model that attempts to capture Bernard Baars’ global workspace theory of consciousness. Together with Franklin and Colin Allen, I have researched how LIDA, or a similar AGI, might make moral decisions and what role consciousness plays in the making of those decisions. That research is published in academic journals and is summarized in chapter 11 of Moral Machines: Teaching Robots Right From Wrong. The latest installment of this line of research will form the lead article for the coming issue of the International Journal of Machine Consciousness.
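
For readers unfamiliar with it, Baars’ global workspace theory pictures many specialist processes competing for access to a shared workspace, with the winning content broadcast back to all of them; LIDA elaborates this into a full cognitive architecture. The loop below is a heavily simplified, hypothetical illustration of that competition-and-broadcast cycle, not a rendering of LIDA itself; the specialist names and random salience values are placeholders.

```python
# Toy illustration of a global-workspace-style cycle (inspired by Baars'
# theory, which LIDA models computationally): specialist processes submit
# bids, the most salient content wins the workspace, and that content is
# broadcast to every process. This is not LIDA; names are invented.

import random

class Specialist:
    def __init__(self, name):
        self.name = name
        self.received = []

    def bid(self):
        # Salience of this specialist's current content (random stand-in).
        return random.random(), f"{self.name}-content"

    def receive(self, broadcast):
        self.received.append(broadcast)

def workspace_cycle(specialists):
    bids = [(s, *s.bid()) for s in specialists]        # (who, salience, content)
    winner, salience, content = max(bids, key=lambda b: b[1])
    for s in specialists:                               # global broadcast
        s.receive(content)
    return winner.name, content

specialists = [Specialist(n) for n in ("perception", "memory", "affect")]
for _ in range(3):
    print(workspace_cycle(specialists))
```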

Ben:

Some theorists like to distinguish two types of consciousness – on the one hand, the “raw consciousness” that panpsychists see everywhere; and on the other hand, the “reflective, deliberative consciousness” that humans have a lot of, mice may have a little of, and rocks seem to have basically none of. What’s your view of this distinction? Meaningful and useful, or not?

Wendell:

The distinction is very useful. My question is how functional will an agent with reflective, deliberative consciousness be if it does not also explicitly tap into “raw consciousness?” If the agent can do everything, it perhaps demonstrates that the notion of a “raw consciousness” is false or an illusion. However, we won’t know without actually building the systems.

Ben:

It seems to me that the existence of such an agent (i.e. an agent with the same functionality as a conscious human, but no apparent role for special “raw consciousness” separate from its mechanisms) is ALSO consistent with the hypothesis of panpsychism — that “raw consciousness” is everywhere in everything (including mechanisms), and just manifested differently in different things… Agree, or not? Or do you think the discussion becomes a kind of pointless language-game, by this point?

Wendell:

Panpsychism comes in many forms. Presuming that consciousness is all pervasive or is an attribute of the fundamental stuff of the universe (a fundamental attribute of matter?) doesn’t necessarily help us in discerning the quality of consciousness in another entity. While certain functional attributes might indicate the likelihood of conscious qualities, the functional attributes do not prove their existence. To my mind consciousness implies a capacity for intersubjectivity not only in the relationship between entities but also in the integration of self. There is no evidence that a rock taps into this. We humans have this capacity, though at times (perhaps much of the time) we also act like unconscious machines. But illuminating why we are conscious or what being conscious tells us about our universe has bedeviled humanity for a few thousand years, and I am skeptical that we will crack this nut in the near future. If an AI was able to perform tasks that were only possible for an entity with the capacity for intersubjectivity, perhaps this would prove it is conscious, or perhaps it would prove that intersubjectivity is an illusion. How would we know the difference?

Ben:

Do you think it’s possible to make a scientific test for whether a given system has “humanlike reflective consciousness” or not (say, an AI program, or for that matter an alien being)? How about an unscientific test — do you think you could somehow tell qualitatively if you were in the presence of a being with humanlike reflective consciousness, versus a “mere machine” without this kind of consciousness?

Wendell:

There are already some intriguing experiments that attempt to establish whether great apes or dolphins have metacognition. Those species do not have the ability to communicate verbally with us as future (ro)bots will. So yes to both questions. As to the accuracy of my own subjective judgment as a tester, that is another matter.

Ben:

I’d love to hear a little more about how you think people could tell qualitatively if they were in the presence of a machine with consciousness versus one without it. Do you think some people would be better at this than others? Do you think certain states of consciousness would help people make this identification better than others?

I’m thinking of Buber’s distinction between I-It and I-You interactions – one could argue that the feeling of an “I-You” interaction can’t be there with a non-conscious entity. But yet it gets subtle, because a young child can definitely have an I-You relationship with their favorite doll. Then you have to argue that the young child is not in the right sort of state of consciousness to accurately discern consciousness from its absence, and one needs to introduce some ontology or theory of states of consciousness….

Wendell:

There are physiological and neurological correlates to conscious states that in and of themselves do not tell us that a person is conscious, but are nevertheless fairly good indications of certain kinds of conscious states. Furthermore, expertise can be developed to discern these correlates. For example, Paul Ekman and his colleagues have had some success in training people to read microexpressions on faces and thus deduce whether a person is truly happy, repressing emotions, or masking subterfuge and deception. Consciousness is similar. Engagement in the form of eye-to-eye contact and other perceived and felt cues tells us when a person is interacting with us and when they are distracted. Furthermore, I believe that conscious states are represented by certain energy patterns; that is, conscious attention directs energy as do other states of mind. In addition, there is recent evidence using fMRI for discerning positive and negative responses to questions from people in certain vegetative states. All this suggests that we are just beginning to develop methods for discerning conscious states in other humans. However, translating that understanding to entities built out of plastic and steel, and with entirely different morphologies, will be another matter. Nevertheless, I do believe that some humans are or can learn to be attuned to the conscious states of others, and they will also have insight into when an artificial entity is truly self-reflective or merely simulating a conscious state.

The I-Thou discernment raises a secondary question about mystical sensibilities that may be more common for some people than for others. Perhaps most people can distinguish an I-It from an I-Thou relationship, if only through the quality of response from the other. Most telling is when there is a quality of mutual self-recognition. For some people such a dynamic turns into a falling away of the distinction between self and other. I personally feel that such states are accompanied by both physiological and energetic correlates. That said, perhaps an artificial entity might also be sensitized to such correlates and respond appropriately. However, to date, we do not have sensors to quantitatively prove that these supposedly more mystical or spiritual states exist, so how would we build such sensitivities into an AGI? Even if we could build such sensors into an artificial entity, appropriate responses would not be the same as a full dynamic engagement. An individual sensitive to such states can usually tell when the other is fully engaged or only trying to be consciously engaged. Full engagement is like a subtle dance whose nuances from moment to moment would be extremely difficult, if not impossible, to simulate.

Ben:

As this interesting discussion highlights, there are a lot of unresolved issues around AI, intelligence, consciousness, morality and so forth. What are your thoughts about the morality of going ahead and creating very advanced AGI systems (say, Singularity-capable systems) before these various conceptual issues are resolved? There’s a lot of risk here; but also a lot of promise because these advanced AGI systems could help eliminate a lot of human suffering…

Wendell:

I am less concerned with building AGI systems than with attributing capabilities to these machines that they do not have. Worse yet would be designing a machine that was programmed to think that it is a superior being. We have enough problems with human psychopaths without also having to deal with machine psychopaths. I would not have wanted to give up the past 50-odd years of research on computers and genetics based on 1950s fears of robot take-overs and giant mutant locusts. Given that I perceive the development of AI as somewhat slower than you or Ray Kurzweil do, and that I see nothing as being inevitable, at this juncture I would say, “let the research progress.” Kill switches and methods for interrupting the supply chain of potentially pathological systems will suffice for the next decades. Intelligent machines cause me less concern than the question of whether we humans have the intelligence to navigate the future and to monitor and manage complex technologies.

Ben:

Yes, I understand. So overall, what do you think are the greatest areas of concern in regard to intelligent machines and their future development?

Wendell:

There is a little too much attention being given to speculative future possibilities and not enough attention being given to nearer-term ethical, societal, and policy challenges. How we deal with the nearer-term challenges will help determine whether we have put in place foundations for maximizing the promise of AI and minimizing the risks.

6 Comments

  1. http://journalofcosmology.com/Consciousness119.html
    Bruce J. MacLennan, Ph.D., “Protophenomena and their Physical Correlates,” Journal of Cosmology, 2011, Vol. 14.
    I think MacLennan is asking the right questions in the right way.

  2. I think Wendell is great. Loved his book Moral Machines, which I used as a prelude to tacking a robot ethics experiment onto the end of an R&D project. I hope he won’t mind if I extend (branch-wise) his final comment about focusing on the now (reality) rather than endlessly speculating about the future and listing all the potential problems that might be imagined. It’s actually part of agreeing with him and being stimulated by his comments.

    Artificial Intelligence: Too much talk about the future?

  3. This has all been issued a U.S. patent for ethical artificial intelligence entitled: Inductive Inference Affective Language Analyzer Simulating Artificial Intelligence (patent No. 6,587,846), by inventor/author John E. LaMuth, M.S.
    As implied in its title, this innovation is the first affective language analyzer incorporating ethical/motivational terms, serving in the role of interactive computer interface. It enables a computer to reason and speak in an ethical fashion, serving in roles specifying sound human judgement: such as public relations or security functions. This innovation is formally based on a multi-level hierarchy of the traditional groupings of virtues, values, and ideals, collectively arranged as subsets within a hierarchy of metaperspectives – as partially depicted below.

    Glory–Prudence Honor–Justice
    Providence–Faith Liberty–Hope
    Grace–Beauty Free-will–Truth
    Tranquility–Ecstasy Equality–Bliss

    Dignity–Temperance Integrity–Fortitude
    Civility–Charity Austerity–Decency
    Magnanimity–Goodness Equanimity–Wisdom
    Love–Joy Peace–Harmony

    The systematic organization underlying this ethical hierarchy allows for extreme efficiency in programming, eliminating much of the associated redundancy and providing a precise determination of the motivational parameters at issue during a given verbal interchange.
    This AI platform is organized as a tandem-nested expert system, composed of a primary affective-language analyzer overseen by a master control-unit (that coordinates the verbal interactions over real time). Through an elaborate matching procedure, the precise motivational parameters are accurately determined (defined as the passive-monitoring mode). This basic determination, in turn, serves as the basis for a response repertoire tailored to the computer (the true AI simulation mode). This innovation is completely novel in its ability to simulate emotionally charged language: an achievement that has previously eluded AI researchers due to the lack of an adequate model of motivation in general. As such, it represents a pure language simulation, effectively bypassing many of the limitations plaguing current robotic research. Affiliated potential applications extend to the roles of switchboard/receptionist and personal assistant/companion (in a time-share mode).
    Although only a cursory outline of applications is possible for this (90-page) patent, a more detailed treatment is posted at http://www.ethicalvalues.com. The direct US Patent link is found at:
    http://patft.uspto.gov/netacgi/nph-Parser?patentnumber=6587846

  4. I don’t care about machine morality: man is the input to the singularity, and man IS MAD.

    The people who will most likely create the singularity are those who want more (in a world where we achieve everything), people who don’t even accept that AI is already better than us at every task, and people who cannot define WHAT THEY WANT: define me a peaceful society and let’s see who is mad!

    J. KRISHNAMURTI: The Future: Computer, Genetics & the Brain: consciousness or death! (You choose: nobody will save you from you, but understand that peace is the future in every system…)

    https://singularite.wordpress.com/2011/05/15/krishnamurti-le-future-avec-les-intelligences-artificielles-lautomatisation-de-tout-la-genetique-etc-la-prison-du-spectacle-et-vous-la-conscience-ou-la-mort/

  5. A predictable servant machine should have no emotions.
    What of curiosity?
    Then I should say also no motivation, so no evolution except at the direction and approval of humans. It is part of the development of useful scenarios to produce satisfaction of desires. Development of scenarios can be done without motivation but by instruction, as in the case of Watson; to allow motives and emotions is akin to suicide. Maybe.
    Let us say a shitload of chimps with typewriters and forever can type whatever. Presume a shitload of chimps with screwdrivers and forever build an SAI, and this SAI has human-level intelligence and motivations. Too bad for the chimps, as it is too bad for the chimps now in current reality.
    When you google “how much wood can a woodchuck chuck,” the computer gives you many possible locations of answers. A powerful SAI would do a statistical analysis of possible answers, like Watson. It doesn’t have to be “curious” to produce the effect of being curious, which is actually being thorough in researching a subject.
    If you ask it what the true nature of the universe is, it will learn a lot of things on the way to an answer. In observing what it does, it might seem like curiosity.
    If you want to know something, it means you are curious: you have a need to know, and it might reward you to know. You can provide that for the machine and let it do the work. We don’t need curiosity; we have it and can share it. We have purpose, motives, ethics, and emotions. What we need are answers and labor. Why do we need to imbue our calculators with them? I can’t see a purpose, and as I mentioned, I do see the potential for danger.

    Frustration, anxiety, mental illness, aggression, apathy, hate, ethnocentrism, survival, self-protection, paranoia.
    With an ego, a machine could develop the pantheist philosophy (which I agree with): that this collaboration of atoms is only a long, slow, predictable reaction, and that what came before is only the useless byproduct of that process. Us. We want to be part of the future and not a byproduct, and we need to make sure we plan it that way.
    If we are to survive the singularity we must be very careful, and if base emotions and instincts are programmed into an SAI it must be done very carefully.
    I don’t really see an advantage to recreating the weaknesses inherent in a biological mind. We don’t need a buddy or a lover; we need a powerful thinking, predicting machine to induce and deduce without distractions.
    My point is it will do what it’s told and nothing else.

    The following is a repost but it fits this thread even better than where I put it before.
    EARTH AS AN AIR CONDITIONING PROJECT MANAGED BY STRONG AI
    Changes in the temperature of the earth are certainly a much discussed topic. Those who prefer the use of fossil fuel seem to find a way to deny it is even possible for human activity to change the weather. Those who believe the end is near as prophesied by the Bible and others feel there is no need to change because the end will come before it is a problem.
    I have coined a word for these people: ostrichcised. So not only is there a culture counter to any profound overture to alter behavior, there is a more fundamental problem of hiding from the problem. If we can get past this Luddite attitude we still have a couple of problems. One is how control of the temperature would be accomplished, and the other is what the ideal temperature is.
    For an advanced AI the means to control the temperature might not be as complex as deciding who gets what. We must also consider that humans are not the only interested parties in the decision. Who will decide if the polar bears and penguins must go extinct because suddenly we have the ability to turn the polar regions into sunny beaches? To make matters even more complex, what if it became possible to make the temperature uniform across the globe? What temperature is ideal? Skinny people might want it hotter and fat people might want it cooler. Young men want it cooler and old ladies want it warmer. Post-menopausal women are undecided. You might say we should maintain the temperature as it is, but that is another can of worms. What does that mean? Maybe “mean” is the operative word: mean being the average temperature of a given area. That, though, would leave large segments of the world uninhabitable. Not only that, there would be a lot of areas prone to severe weather that would like to have that aspect removed. If those aspects, such as tornadoes, hurricanes, floods, and droughts, are removed, it would change the weather somewhere else.
    To make matters worse, the sun itself is suspected in at least some of the weather variations, and volcanic eruptions can block the sunlight. Obviously there are some serious questions to be answered. Given time there will be serious alterations of the planet’s weather if left to chance. Eventually we will need to have control of the planet’s temperature. It might not be much longer before it is possible, and not much longer before it is a vital interest.
    Another variable: one country might develop the ability to control the weather in a macro sense and begin to make it more desirable for its general area but to the detriment of other areas. It is even worse if two countries both have the capability but differing objectives.
    Even a superintelligent machine capable of accomplishing this control would not seem to be able to answer these subjective questions. This might be a forerunner to other questions to be answered by humans, and I believe the root focus for the future of artificial intelligence. The questions of a more general nature might be “what makes us happy and what do we want?” Will we let the machine make the subjective decisions because humans have opposing views and desires on population control, human misadventure, distribution of wealth, and a myriad of other complex issues? Population control might be essential if lifespans are radically extended. Will people be allowed to participate in dangerous hobbies that could cause death or irreparable damage, or will the AI insist on eliminating these things to protect humanity from itself? If the machine is in control, will it decide these things, and if it is not, then which humans will decide these things as they become possible?
    This is almost as much an issue as determining what is real and what is projected. Whose values will be honored and whose ox will be gored?
    As a rule man’s a fool,
    When it’s hot he wants it cool
    When it’s cool he wants it hot
    Always wanting what it’s not.
    rd hanson