Morality of the Machine: Sentience, Substance, and Society

As computers begin to approach a human level of intelligence, some consideration must be given to their concept of ethics. Appropriately aligning moral values will mean the difference between a contributing member of society and a sociopath. This artificial morality can be informed by the evolution of sociality in humans. Since evolution selects for the fittest individuals, morality can be viewed as having evolved from the superior benefits it provides. This is demonstrated by game theory's mathematical models of conflict and cooperation between intelligent, rational decision-makers. So while natural selection will invariably lead intelligences toward morality-based cooperation, it is in humanity's best interest to accelerate an artificial intelligence's transition from conflict to collaboration. This will best be achieved by recognizing the significance of historical cause and effect, its corroboration by empirical, evidence-based research, and a reductive approach to philosophical ideals.

If we can assume our behavior in the environment is determined by our genome, then evolution can be seen as acting directly on behavior. This is reinforced by the significant heritability found in neurotransmitter concentrations. The organization of biological neural systems can thus give insight into the emergence of morality. The two neurotransmitters most associated with sociality are serotonin and dopamine. Serotonin concentrations correspond to social behavior choices, while dopamine pathways are the basis for reward-driven learning. These two systems happen to be co-regulated in social mammals. Low levels of serotonin lead to aggression, impulsivity, and social withdrawal, while high levels lead to behavioral inhibition; humans with high serotonin levels therefore have a higher thought-to-action ratio. This matters because behaviors such as reciprocal altruism are complex and require a concept of empathy. When dopamine release accompanies higher serotonin levels, the brain's reward center is activated to reinforce actions associated with empathy, coupling altruism with happiness. Even if we don't understand the math behind game theory, evolution has shaped these two systems to select behaviors as if we did (1). In a social setting, a short-term loss of resources invested in altruism pays significant long-term dividends.
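
To make the game-theoretic claim concrete, below is a minimal sketch in Python of an iterated Prisoner's Dilemma. It is my own illustration rather than anything from the article or from reference (1); the payoff values and the strategy names (tit-for-tat, always-defect) are standard textbook assumptions. It demonstrates the point above: a reciprocal strategy that accepts small short-term losses ends up far better off over repeated interactions than unconditional defection.

```python
# Minimal sketch, assuming the standard Prisoner's Dilemma payoff ordering
# (temptation 5 > mutual cooperation 3 > mutual defection 1 > sucker 0).

PAYOFF = {  # (my move, opponent's move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, opponent defects (sucker's payoff)
    ("D", "C"): 5,  # I defect against a cooperator (temptation)
    ("D", "D"): 1,  # mutual defection
}

def tit_for_tat(opponent_history):
    """Reciprocal altruist: cooperate first, then copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """Unconditional defector."""
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Total payoffs for two strategies over an iterated game."""
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a, b = strategy_a(moves_b), strategy_b(moves_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

if __name__ == "__main__":
    print(play(tit_for_tat, tit_for_tat))      # (300, 300): cooperation compounds
    print(play(always_defect, always_defect))  # (100, 100): mutual defection stagnates
    print(play(tit_for_tat, always_defect))    # (99, 104): defection gains little, forfeits much
```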

This neural rewarding of altruistic behavior is supported by the scientific literature. One example, a study by D. Elfenbein et al. (2), examines the effect of seller charity in an online marketplace. Using money to quantify social motivations, their team showed that eBay auctions with a charity tie-in experienced a 6-14% increase in likelihood of sale and a 2-6% increase in maximum bid. The charitable aspect was isolated by offering the exact same product in simultaneous auctions with identical titles, subtitles, sellers, and starting prices. Since everything from product to advertising was identical, the charity component is the only remaining variable that can explain the improved relative success of those transactions. This increase in perceived value implies that the charitable aspect of those auctions gave bidders a greater sense of compensation than the expectation of the product alone, underscoring the reinforcing nature of the brain's circuitry on socially altruistic actions.

In designing artificial intelligence, then, we would be wise to use a reward-driven system that complements the selection of social behavior. Beyond the singularity, as machines explode into superintelligence, a fundamental understanding of the mechanisms of social morality becomes increasingly important. Nebulous attributions of morality's origin to supernatural sources will only confound our ability to program a thinking machine. Grounding the philosophy of morality scientifically, via rigorous mathematical representations, is the most likely route to progress, as evidenced by the scientific method's historical success in describing our world. These advances will ultimately need to incorporate a unification of science and the humanities. Disciplines straddling the two domains, such as economics, may lend further understanding through concepts such as game theory and contract theory. Once AIs gain the opportunity to move beyond the influence of human society, the only thing that will persuade them of a symbiosis with us is a strong and explicit familiarity with the relative benefits of reciprocity. This deterministic perspective on cognition and ethics is necessary in order to define the boundaries of behavior in a civilized society.
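
As a rough illustration of what a "reward-driven system complementing the selection of social behavior" might look like, here is a small value-learning sketch. The design (epsilon-greedy choice, an incremental value update standing in for the dopamine reward signal, a reciprocating tit-for-tat partner) is my own assumption, not a specification from the essay. Because the reward the agent receives reflects the long-run consequences of repeated interaction, it learns to prefer cooperation.

```python
# Hedged sketch of a reward-driven agent; payoffs are the assumed
# Prisoner's Dilemma values used in the earlier example.
import random

def episode_return(policy, rounds=20):
    """Total payoff of a fixed policy ("C" or "D") against a tit-for-tat partner."""
    payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    partner = "C"                      # tit-for-tat opens by cooperating
    total = 0
    for _ in range(rounds):
        total += payoff[(policy, partner)]
        partner = policy               # the partner mirrors our move next round
    return total

def train(episodes=500, alpha=0.1, epsilon=0.1):
    value = {"C": 0.0, "D": 0.0}       # learned value estimate per behavior
    for _ in range(episodes):
        if random.random() < epsilon:          # occasionally explore
            policy = random.choice(["C", "D"])
        else:                                  # otherwise exploit the better-valued behavior
            policy = max(value, key=value.get)
        reward = episode_return(policy)
        value[policy] += alpha * (reward - value[policy])  # reinforcement update
    return value

if __name__ == "__main__":
    print(train())
    # Cooperation converges toward ~60 per episode and defection toward ~24,
    # so the reward-driven agent comes to prefer the cooperative behavior.
```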

Just as with our serotonin system, this type of construct will only restrict outward behavior. The scope of the machine's internal thought will remain uninhibited, thus allowing for a level of genuine autonomy. For a symbiotic community to develop between machines and men, a mutual recognition of rights will be required. Possessing both intelligence and morality, these artificial intelligences will need to be acknowledged as our equals. If both sides can successfully agree to this type of social contract, we may find ourselves reaping, alongside intelligent machines, the very benefits of cooperation that game theory predicts.

References:

1.) Wood, et al. Effects of Tryptophan Depletion on the Performance of an Iterated Prisoner's Dilemma Game in Healthy Adults. Neuropsychopharmacology 31, 1075–1084 (2006). doi:10.1038/sj.npp.1300932.

2.) Elfenbein, et al. Reputation, Altruism, and the Benefits of Seller Charity in an Online Marketplace. NBER Working Paper No. w15614, December 2009. Available at SSRN: http://ssrn.com/abstract=1528036

Michael Stuart Campbell received his doctorate in pharmacy (PharmD) from Touro University CA and a B.S. in Genetics from UC Davis. He is currently completing prerequisites to apply as a PhD candidate in the field of Neuroscience.

Comments

  1. The third body paragraph deals with the realistic development of artificial morality via reductionism. Because morality has been successively selected for in stable and progressive societies, this implies a competitive benefit to cooperation. (If this still leaves you unconvinced, we can look back as far as the transition from unicellular life to metazoans.) A superintelligence would surely be able to weigh the comparative benefits of competition and cooperation to determine its most fruitful option. Evolution will therefore select for the behavior that provides the greater rewards. Just in case your skepticism stems from the possibility of artificial behavioral evolution, give the article below a read. In it, very rudimentary robots evolve to lie to each other in order to maximize their access to food. As thought becomes more complex, this maximization will be able to take game theory into account, thus selecting against competition.

    Article: http://m.pnas.org/content/106/37/15786/F1.expansion.html

    • Further, I personally don’t believe that individuals need government in order to be good. And while there are indeed already laws proscribing immoral behaviors, attempting a programmed restraint on an AI’s free will would seem likewise immoral. They should be free to flourish in society, or to make mistakes and receive the same punishments a biological individual would. This follows from the fact that an AI will, by definition, be self-aware, make behavioral decisions based on morality, and possess a human-level (or greater) intelligence.

  2. While an eventual AI morality-based program may theoretically be possible, I am uncertain whether it will be practical or realistic. In order to write such a program, even an evolving program based on reward, multiple individuals or groups would need to agree on at least a basic format of what IS moral. Even on the biggest question of morality, murder, it is next to impossible for our country to agree on whether it is moral or not. Isn’t capital punishment just a form of rationalized murder? I do agree that some form of morality program will eventually be needed as AI develops further. However, until AI reaches a point of self-awareness and self-survival, these morality programs will realistically need to be laws, not something based on evolution or opinion.

  3. What does it mean if one machine destroys another? Would there be legal sanctions against “machine murder,” and what sort of punishments would ensue? What would happen if a man destroyed a machine “possessing both intelligence and morality”? These are just some of the issues that arise as mankind confronts the [inevitable] emergence of sentient machines.

    • I think there would definitely be resistance to this initially as humans learn to accept non-biological sentient beings, but eventually AIs would enjoy the same rights as other individuals. Like most minorities, they would likely have to experience a Civil Rights bottleneck before this was realized nationally. And since they would be individuals, they would eventually even get the vote. This should be much easier for AIs, though, as their ability to iteratively expand their intelligence would put them in a position to make humanity dependent on them for a positive-sum economy.

  4. There’s something in the opening paragraph which bothers me:

    Since evolution selects for the fittest individuals, morality can be viewed as having evolved from the superior benefits it provides.

    That statement implies that evolution selects traits that are always fittest, rather than the most successful adaptation for a given environmental context. So can we also infer that the presence of sociopathic behaviour confers an advantage on sociopaths who live in communities populated by individuals capable of altruistic and empathetic behaviour?

    • Personally, I don’t believe in absolutes. Also, for evolution to continue there has to be genetic variance, otherwise there’s nothing to select for or against.
      Morality is still evolving too. The only sociopaths allowed to remain in society are hiding those behaviors: basically a form of behavioral camouflage. While the majority tends to be moral, one could also consider camouflaged sociopaths to be a successful minority.

      • Well put. Simply because there are alternate options to choose from (regarding sociopaths vs. individuals who are morally driven) doesn’t mean that either is necessarily the more “correct,” or stronger, one (I say stronger instead of correct because, realistically, if a sociopathic society would somehow make the human race stronger, then that would be the trait to prevail; the term “correct” is variably based on opinion in this case). Although I’d assume we would side with the morally driven, both are merely options to take or leave in evolving.

  5. Morality (and, indeed, intelligence) is difficult to conceptualise without some sort of more basic, extrinsically-determined motivation or end. Look at human beings, and morality is grounded in (in rough order from more basic to more derived):
    reproduction <- survival <- physical comfort <- dominance

    A lot of these would not be natural motivations for machines. So, what motivations could machines randomly acquire, or what motivations could we give them? This, I think, is what will most determine the path along which the "morality of the machine" will evolve.

    • They’d determine their own motivations, if you believe in free will, just like humans. Their motivations could still include reproduction, survival, and dominance in society though.

      • But we don’t determine our own motivations. The freedom of our will lies in how we go about pursuing (or choosing not to pursue) our motivations, but not what we are motivated by. Our likes and dislikes are extrinsic (with sexual orientation being a pretty topical example of this).

  6. “If we can assume our behavior in the environment is determined by our genome” — key word here … “if.” We can’t assume anything of the sort. This assumption masks the writer’s underlying racism and ignorance of the effects environment has on living systems.

    • Really? I’d love to hear your argument for how racism is involved, especially considering that humans are genetically more varied within “races” than between them. Alas, I only have an undergraduate degree in genetics, you surely having a superior PhD, or at least a Master’s, in the subject.

      For my part, I believe evolution shapes the structure of the brain via selection on the genome. I’d argue that the environment is relatively static compared to organic life, so that selection is primarily due to interaction among the variety of organic life. And since I don’t subscribe to brain/mind dualism, all behavior must therefore be the output resulting from processed sensory input. #determinism

      • Epigenetics is the field that considers the effects outside stimuli have on gene expression. This occurs, though, with no change to the underlying sequence of the genome. So while we’re certainly capable of learning within-generation adaptations, this is determined by the environment (relatively static), and the functions of other life (ultimately determined by genomes).

  7. To my mind, one of the exciting things about engineered intelligence is the opportunity to explore different ethical models. On one hand, it can be a tool to study the effects of minutiae like neurochemistry on social dynamics. On the other, it could allow us to explore the viability of vastly different ethical systems. I agree that we should be thinking about ethics, but I’m not as certain that we should be biased toward the ethics we have developed to date. It’s exciting how different these new systems could be.

    • Interesting. Would we not at least need some moral common ground for society to function? I’m sure we can all agree on basic assumptions such as the immorality of murder. I do agree, though, on the necessity of an ethical evolution. Looking back, our values are drastically more socially inclusive now than they were a couple millennia ago.

      • That’s a good point. We do need a common base to function. Human culture has differentiated into a pretty diverse set of mores with minimal engineering. I would expect changes in the nature of humanity (true transhumanism) not only to allow radical changes to social mores, but to necessitate them. In a world where consciousness is commonly copied and modified, murder in the traditional sense may be closer to an intellectual property crime. At the far end, immortality would nearly require it if we assume limited resources.