The Moral Equivalent of War – Steve Fuller

Last December I interviewed Steve Fuller at Warwick University in England. He spoke broadly and very eloquently, and I was engaged; not surprisingly, he holds the Auguste Comte Chair in Social Epistemology in the Department of Sociology. I asked him to provide an article for H+ Magazine, and he obliged, so here it is: ‘The Moral Equivalent of War’ (and The Right to Science). Note that I will finish editing and uploading an interview series with him entitled ‘Humanity 2.0’ at a later stage (watch H+ Magazine for an article on that), but in the meantime I have embedded a couple of snippets of the interview along with his article on the Right to Science. I do hope you enjoy 🙂 — Adam Ford

The Right to Science

Steve Fuller

In 1906 the great American pragmatist philosopher William James delivered a public lecture entitled ‘The Moral Equivalent of War’. James imagined a point in the foreseeable future when states would rationally decide against military options to resolve their differences. While he welcomed this prospect, he also believed that the abolition of warfare would remove an important pretext for people to think beyond their own individual survival and toward some greater end, perhaps one that others might end up enjoying more fully. What then might replace war’s altruistic side?

It is telling that the most famous political speech to adopt James’ title was US President Jimmy Carter’s 1977 call for national energy independence in response to the Arab oil embargo. Carter characterised the battle ahead as really about America’s own ignorance and complacency rather than some Middle Eastern foe. While Carter’s critics pounced on his trademark moralism, they should have looked instead to his training as a nuclear scientist. Historically speaking, nothing can beat a science-led agenda to inspire a long-term, focused shift in a population’s default behaviours. Louis Pasteur perhaps first exploited this point by declaring war on the germs that he had shown lay behind not only human and animal disease but also France’s failing wine and silk industries. Moreover, Richard Nixon’s ‘war on cancer’, first declared in 1971, continues to be prosecuted on the terrain of genomic medicine, even though arguably a much greater impact on the human condition could have been achieved by equipping the ongoing ‘war on poverty’ with comparable resources and resoluteness.

Science’s ability to step in as war’s moral equivalent has less to do with whatever personal authority scientists command than with the universal scope of scientific knowledge claims. Even if today’s science is bound to be superseded, its import potentially bears on everyone’s life. Once that point is understood, it is easy to see how each person could be personally invested in advancing the cause of scientific research. In the heyday of the welfare state, that point was generally understood. Thus, in The Gift Relationship, perhaps the most influential work in British social policy of the past fifty years, Richard Titmuss argued, by analogy with voluntary blood donation, that citizens have a duty to participate as research subjects, but not because of the unlikely event that they might directly benefit from their particular experiment. Rather, citizens should participate because they would have already benefitted from experiments involving their fellow citizens and will continue to benefit similarly in the future.

However, this neat fit between science and altruism has been undermined over the past quarter-century on two main fronts. One stems from the legacy of Nazi Germany, where the duty to participate in research was turned into a vehicle to punish undesirables by studying their behaviour under various ‘extreme conditions’. Indicative of the horrific nature of this research is that even today few are willing to discuss any scientifically interesting results that might have come from it. Indeed, the pendulum has swung the other way. Elaborate research ethics codes enforced by professional scientific bodies and university ‘institutional review boards’ protect both scientist and subject in ways that arguably discourage either from having much to do with the other. Even defenders of today’s ethical guidelines generally concede that had such codes been in place over the past two centuries, science would have progressed at a much slower pace.

The other and more current challenge to the idea that citizens have a duty to participate in research comes from the increasing privatisation of science. If a state today were to require citizen participation in drug trials, as it might jury duty or military service, the most likely beneficiary would be a transnational pharmaceutical firm capable of quickly exploiting the findings for profitable products. What may be needed, then, is not a duty but a right to participate in science. This proposal, advanced by Sarah Chan at the University of Manchester’s Institute for Bioethics, looks like only a slight shift in legal language, but it marks the difference between science appearing to the ordinary citizen as an obligation and as an opportunity. In the latter case, one does not simply wait for scientists to invite willing subjects; rather, potential subjects are invited to organise themselves and lobby the research community with their specific concerns. The cost of this proposal is that scientists may no longer exert final control over their research agenda, but the benefit is that they can be assured of steady public support for their work.

Steve Fuller holds the Auguste Comte Chair in Social Epistemology at the Department of Sociology, University of Warwick, UK. He has recently published three books on the future of humanity, all with Palgrave Macmillan: Humanity 2.0: What It Means to Be Human Past, Present and Future (2011); Preparing for Life in Humanity 2.0 (2012); The Proactionary Imperative: A Foundation for Transhumanism (with Veronika Lipinska, 2013). The last book develops further the argument in this paper. Website: http://bit.ly/q3GBmi




4 Comments

  1. Despite how crucial science is to the functioning and advancement of society as it exists, I think this article’s content completely misunderstands the values, aspirations, and goals of the typical individual and group (which may hold values that go beyond the individual, e.g. religion). I am not convinced that the dutiful citizen, wherever such citizens are, or even one who likes to exercise their rights openly (rather than just taking the path of convenience), represents the majority or even the most productive and useful members of society. Fortunately, there is something going on that is even better than the citizen’s duty or right for pushing science: consumerism and capitalism. If we say that science in its most obvious and widespread form exists as technology, the gadgets and advancing techniques that surround us, then we can also say that it is the money flowing from consumers that activates the need for newer and more useful technology. It is science that facilitates those improvements, and especially the theories that facilitate larger improvements into the future. The more we push people to keep improving themselves and thereby become more valuable, the more money per person is available to push technology forward. This has the added benefit of prioritizing science according to citizens’ current and likely future needs. The desire to buy and experience is a drive with so much motivating power that we need not even consider ‘wartime quota’ value systems such as duty, sacrifice, discipline, and overt ‘community’ principles. It seems counterintuitive that mild greed and laziness are the best impetus for facilitating science, but they are the most powerful form of individual participation, and we should not dismiss them.
Further, the spectrum of scientific study that falls under the consumer dollar can reasonably range from astrophysics to quantum mechanics to genetic studies, even substituting for the need for test subjects. Besides, what could be a better motivator for practicing scientists than knowing that in the near or far future their work will be appreciated by the most democratic and influential vote: that of the buying power of the citizenry?

    • I’m not sure how I feel about the Invisible Hand steering scientific progress. The need for new gadgets doesn’t necessarily translate into funding for important things like research on mental illness and other medical concerns. Not to mention that the laziness and mild greed of the average Joe are easily swayed by bio-conservative fear tactics, although that’s a different story entirely, because arguably the altruistic path could also steer humanity in that direction.

    • Jer, I read the article differently. If there is a disconnect between “scientist and subject”, then removing it could let science progress at a faster pace. To be crass, I think Fuller is asking, “dude, can we experiment on you?” This kind of participation in science implies a further, democratic level of participation: if people take risks for science, they will want to make sure it counts. Would this create a driving force behind science different from, and better than, consumerism? I really don’t know, but it would not necessarily offset consumer-driven science; the two driving forces could probably exist in symbiosis.
