We Must Evolve

In the intellectual realm we need to utilize technology to augment our intelligence (IA) by any means possible, including education, genetic engineering, biotechnology, and the use of artificial intelligence (AI). The same goes for the moral realm. This would include controversial techniques like implanting moral chips in our brains. But what does making ourselves more moral entail? After reading, teaching, and writing about ethics for almost 30 years, it is clear to me that the answer to this question is controversial. But I think the essence of morality lies in understanding the benefits of mutual cooperation and the destructiveness of ethical egoism. We need to be cognizant of the nature of the “prisoner’s dilemma” (PD): that we would all do better, and none of us would do worse, if we all cooperated. Such knowledge would also point toward a resolution of the multi-person PD that is the “tragedy of the commons.” In this version of the dilemma, each person acting in their apparent self-interest brings about disastrous consequences for the rest of us. The effects of situations with the structure of a PD resonate throughout the world, in problems as diverse as insufficient public funding, the threat of environmental disaster, and nuclear annihilation.
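
The multi-person structure can be made concrete with a small sketch. The Python snippet below is only an illustration: the number of users, the private bonus from exploiting, and the damage each act of exploitation does to the shared resource are made-up values, not anything from the article. It checks that exploiting always pays for the individual, and yet that universal exploitation leaves everyone worse off than universal restraint.

```python
# A minimal sketch of the "tragedy of the commons." All numbers are
# illustrative assumptions chosen only to exhibit the structure.
N = 10            # number of users of the commons (assumed)
BONUS = 2         # private gain from exploiting (assumed)
DAMAGE = 0.5      # cost each act of exploitation imposes on every user (assumed)

def payoff(i_exploit: bool, others_exploiting: int) -> float:
    """One user's payoff, given their own choice and how many others exploit."""
    exploiters = others_exploiting + (1 if i_exploit else 0)
    return (BONUS if i_exploit else 0) - DAMAGE * exploiters

# Individually, exploiting always pays, no matter what the others do...
assert all(payoff(True, k) > payoff(False, k) for k in range(N))

# ...yet if everyone exploits, each ends up worse off than if everyone restrains.
assert payoff(True, N - 1) < payoff(False, 0)
```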

But of course, knowing that we all do better if we all cooperate is undermined by the fact that each individual does better by not cooperating, regardless of what others do. (At least in the one-time version of the interaction; in the n-person game the solution is not as clear.) Hobbes’ solution was a coercive governmental power that ensured individuals complied with their agreements. Other solutions include disablement strategies, in which the non-cooperative move is eliminated. Ulysses having himself tied to the mast of his ship so as not to be seduced by the Sirens is an example of disablement. It may be necessary to wire our brains, or utilize other technologies, so that we cannot choose not to cooperate.
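
To make the dominance point concrete, here is a minimal Python sketch with assumed payoff numbers (any payoffs where temptation > reward > punishment > sucker’s payoff would do the same work). It checks that defection is each player’s best reply whatever the other does, and that a Hobbesian fine on defectors, sized arbitrarily here, flips the incentive so that cooperation becomes the best reply instead.

```python
# Illustrative two-player payoffs: (my move, your move) -> my payoff.
# The numbers and the size of the fine are assumptions, not from the article.
PD = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

def best_reply(payoff, their_move):
    """The move that maximizes my payoff, given the other player's move."""
    return max(["cooperate", "defect"], key=lambda mine: payoff[(mine, their_move)])

# Without enforcement, defection is my best reply no matter what you do.
assert best_reply(PD, "cooperate") == "defect"
assert best_reply(PD, "defect") == "defect"

# Hobbes' remedy: a coercive power that fines defectors, changing the payoffs
# so that cooperation becomes the best reply instead.
FINE = 3
enforced = {(mine, theirs): p - FINE * (mine == "defect")
            for (mine, theirs), p in PD.items()}
assert best_reply(enforced, "cooperate") == "cooperate"
assert best_reply(enforced, "defect") == "cooperate"
```

On this picture, a morality chip or wired brain would play the same role as the fine: it alters the options or the payoffs so that non-cooperation is no longer the best reply.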

Ideally, increasing intelligence and morality would cross-fertilize. As we became more intelligent, we would recognize the rationality of morality.[1] We would see that the benefits of mutual cooperation outweigh the benefits of non-cooperation. (This was Hobbes’ insight: we all do best by avoiding the state of nature.) As we became more moral, we would understand the need for greater intelligence to ensure our flourishing and survival. We would accept that increased intelligence is indispensable to a good future. Eventually we would reach the higher states of being and consciousness so desired by transhumanists.

###

John G. Messerly, Ph.D., taught for many years in both the philosophy and computer science departments at the University of Texas at Austin. His most recent book is The Meaning of Life: Religious, Philosophical, Scientific, and Transhumanist Perspectives. He blogs daily on issues of futurism and the meaning of life at reasonandmeaning.com.

Comments

  1. Similar to how, in the near future, automated cars are going to make driving so much safer than human driving that there will be pressure to make human driving illegal because it causes accidents. So if we could install a morality chip into our brains, then humankind would stop harming each other so much.
