Is it ethical to put money and resources into trying to develop technological enhancements for human capabilities, when there are so many alternative well-tested mechanisms available to address pressing problems such as social injustice, poverty, poor sanitation, and endemic disease? Is that a failure of priority? Why make a strenuous effort in the hope of allowing an elite few individuals to become “better than well”, courtesy of new technology, when so many people are currently so “less than well”?
The event was described as follows on the LSE website:
This dialogue will consider how issues related to human enhancement fit into the bigger picture of humanity’s future, including the risks and opportunities that will be created by future technological advances. It will question the individualistic logic of human enhancement and consider the social conditions and consequences of enhancement technologies, both real and imagined.
From the stage, Professor Kerr made a number of criticisms of “individualistic logic” (to use the same phrase as in the description of the event). Any human enhancements provided by technology, she suggested, would likely only benefit a minority of individuals, potentially making existing social inequalities even worse than at present.
She raised a number of worries about technology amplifying existing human flaws:
For such reasons, Professor Kerr was opposed to these kinds of technology-driven human enhancements.
When the time for audience Q&A arrived, I felt bound to ask from the floor:
Professor Kerr, would you be in favour of the following examples of human enhancement, assuming they worked?
- An enhancement that made bankers more socially attuned, with more empathy, and more likely to use their personal wealth in support of philanthropic projects?
- An enhancement that made policy makers less parochial, less politically driven, and more able to consider longer-term implications in an objective manner?
- And an enhancement that made clever people less likely to be blind to their own personal cognitive biases, and more likely to genuinely consider counters to their views?
In short, would you support enhancements that would make people wiser as well as smarter, and kinder as well as stronger?
The answer came quickly:
No. They would not work. And there are other means of achieving the same effects, including progress of democratization and education.
I countered: These other methods don’t seem to be working well enough. If I had thought more quickly, I would have raised examples such as society’s collective failure to address the risk of runaway climate change.
Groundwork for this discussion had already been well laid by the other main speaker at the event, Professor Nick Bostrom. You can hear what Professor Bostrom had to say – as well as the full content of the debate – in an audio recording of the event that is available here.
(Small print: I’ve not yet taken the time to review the contents of this recording. My description in this blogpost of some of the verbal exchanges inevitably paraphrases and extrapolates what was actually said. I apologise in advance for any misrepresentation, but I believe my summary to be faithful to the spirit of the discussion, if not to the actual words used.)
Professor Bostrom started the debate by observing that the question of human enhancement is a big subject. It can be approached from a shorter-term policy perspective: what rules should governments set to constrain the development and application of technological enhancements, such as genetic engineering, neuro-engineering, smart drugs, synthetic biology, nanotechnology, and artificial general intelligence? It can also be approached from the angle of envisioning a larger human potential, one that would enable the best possible future for human civilisation. Sadly, much of the discussion at the LSE got bogged down in the shorter-term question, and lost sight of the grander accomplishments that human enhancements could bring.
Professor Bostrom had an explanation for this lack of sustained interest in these larger possibilities: the technologies for human enhancement that are currently available do not work that well:
This lack of evidence of strong tangible results is one reason why Professor Kerr was able to reply so quickly to my suggestion about the three kinds of technological enhancements, saying these enhancements would not work.
However, I would still like to press the question: what if they did work? Would we want to encourage them in that case?
A recent article in the Philosophy Now journal takes the argument one step further. The article was co-authored by Professors Julian Savulescu and Ingmar Persson, and draws material from their book “Unfit for the Future: The Need for Moral Enhancement”.
To quote from the Philosophy Now article:
For the vast majority of our 150,000 years or so on the planet, we lived in small, close-knit groups, working hard with primitive tools to scratch sufficient food and shelter from the land. Sometimes we competed with other small groups for limited resources. Thanks to evolution, we are supremely well adapted to that world, not only physically, but psychologically, socially and through our moral dispositions.
But this is no longer the world in which we live. The rapid advances of science and technology have radically altered our circumstances over just a few centuries. The population has increased a thousand times since the agricultural revolution eight thousand years ago. Human societies consist of millions of people. Where our ancestors’ tools shaped the few acres on which they lived, the technologies we use today have effects across the world, and across time, with the hangovers of climate change and nuclear disaster stretching far into the future. The pace of scientific change is exponential. But has our moral psychology kept up?…
Our moral shortcomings are preventing our political institutions from acting effectively. Enhancing our moral motivation would enable us to act better for distant people, future generations, and non-human animals. One method to achieve this enhancement is already practised in all societies: moral education. Al Gore, Friends of the Earth and Oxfam have already had success with campaigns vividly representing the problems our selfish actions are creating for others – others around the world and in the future. But there is another possibility emerging. Our knowledge of human biology – in particular of genetics and neurobiology – is beginning to enable us to directly affect the biological or physiological bases of human motivation, either through drugs, or through genetic selection or engineering, or by using external devices that affect the brain or the learning process. We could use these techniques to overcome the moral and psychological shortcomings that imperil the human species.
We are at the early stages of such research, but there are few cogent philosophical or moral objections to the use of specifically biomedical moral enhancement – or moral bioenhancement. In fact, the risks we face are so serious that it is imperative we explore every possibility of developing moral bioenhancement technologies – not to replace traditional moral education, but to complement it. We simply can’t afford to miss opportunities…
In short, the argument of Professors Savulescu and Persson is not just that we should allow the development of technology that can enhance human reasoning and moral awareness, but that we must strongly encourage it. Failure to do so would be to commit a grave error of omission.
These arguments about moral imperative – which technologies we should allow to be developed, or indeed encourage to be developed – are in turn strongly influenced by our beliefs about what technologies are possible. It’s clear to me that many people in positions of authority in society – academics as well as politicians – are woefully unaware of realistic technology possibilities. People are familiar with various ideas from science fiction novels and movies, but it’s quite another matter to discern the difference between “this is an interesting work of fiction” and “this is a credible future that might arise within the next generation”.
What’s more, when it comes to people forecasting the likely progress of technological possibilities, I see a lot of evidence in favour of the observation made by Roy Amara, long-time president of the Institute for the Future:
We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
What about the technologies mentioned by Professors Savulescu and Persson? What impact will be possible from smart drugs, genetic selection and engineering, and the use of external devices that affect the brain or the learning process? In the short term, probably less than many of us hope; in the longer term, probably more than most of us expect.
In this context, what is the “longer term”? That’s the harder question!
But the quest to address this kind of question, and then to share the answers widely, is the reason I have been keen to support the growth of the London Futurist meetup, by organising a series of discussion meetings with well-informed futurist speakers. Happily, membership has been rising steadily, reaching nearly 900 by the end of October.
The London Futurist event held on the afternoon of Saturday 3rd November picked up the theme of enhancing our mental abilities. The title was “Hacking our wetware: smart drugs and beyond – with Andrew Vladimirov”:
What are the most promising methods to enhance human mental and intellectual abilities significantly beyond the so-called physiological norm? Which specific brain mechanisms should be targeted, and how? Which aspects of wetware hacking are likely to grow in prominence in the not-too-distant future?
By reviewing a variety of fascinating experimental findings, this talk will explore:
- various pharmacological methods, taking into account fundamental differences in Eastern and Western approaches to the development and use of nootropics
- the potential of non-invasive neuro-stimulation using CES (Cranial Electrotherapy Stimulation) and TMS (Transcranial Magnetic Stimulation)
- data suggesting the possibility to “awaken” savant-like skills in healthy humans without paying the price of autism
- apparent means to stimulate seemingly paranormal abilities and transcendental experiences
- potential genetic engineering perspectives, aiming towards human cognition enhancement.
The number of positive advance RSVPs for this talk, as recorded on the London Futurist meetup site, had reached 129 at the time of writing, which was already a record.
I’ll finish by returning to the question posed at the beginning of my posting:
My answer – which I believe is shared by Professor Bostrom – is that things could still go either way. That’s why we need to think hard about their development and application, ahead of time. That way, we’ll become better informed to help influence the outcome.
About the author
David Wood is a futurist based in the U.K. This essay was reposted with permission from his blog, Dw2blog.com.