"Other animals, which, on account of their interests having been neglected by the insensibility of the ancient jurists, stand degraded into the class of things. … The day has been, I grieve it to say in many places it is not yet past, in which the greater part of the species, under the denomination of slaves, have been treated … upon the same footing as … animals are still. The day may come, when the rest of the animal creation may acquire those rights which never could have been withholden from them but by the hand of tyranny. The French have already discovered that the blackness of skin is no reason why a human being should be abandoned without redress to the caprice of a tormentor. It may come one day to be recognized, that the number of legs, the villosity of the skin, or the termination of the os sacrum, are reasons equally insufficient for abandoning a sensitive being to the same fate. What else is it that should trace the insuperable line? Is it the faculty of reason, or perhaps, the faculty for discourse?…the question is not, Can they reason? nor, Can they talk? but, Can they suffer? Why should the law refuse its protection to any sensitive being?… The time will come when humanity will extend its mantle over everything which breathes… " from "Introduction to the Principles of Morals and Legislation" – 1789 – Jeremy Bentham (1748 – 1832) Introduction to the Principles of Morals and Legislation
Natural science seeks "a view from nowhere", an impartial, objective perspective that dethrones human beings from the centre of the cosmos. By contrast, Ethics is almost entirely anthropocentric. Recent decades have witnessed a slowly expanding "circle of compassion". But it is a circle with one species at its absolute centre. Can Ethics ever aspire to be a rigorous academic discipline that delivers an impartial perspective embracing the interests of all sentient life: the well-being of posthumans, humans and non-human animals; hypothetical extra-terrestrial life, future "cyborgs", and artificial life alike? Or will Ethics always serve to rationalize the self-interest of the world's most powerful lifeforms? Early in the 21st century, have we finally struck more-or-less the right balance between scientific inquiry and animal welfare? Or might we be catastrophically mistaken? Presumably the perspective of any future moral superintelligence won't be species-specific or parochial: a truly "God's-eye view" would offer both a universal value system and a decision procedure on when it is morally permissible to kill, experiment on, or cause suffering to other sentient beings for a higher purpose — irrespective of who is making the judgment. A truly universal ethic would constrain hypothetical posthumans tempted to vivisect or recreate humans — or run authentic ancestor simulations. More topically, the adoption of a universal ethic would constrain members of early 21st-century Homo sapiens who seek to vivisect and kill members of other species.
Unfortunately, no such moral clarity was on display in the European Parliament earlier this month. An initial vote passed by MEPs on May 5, 2009 sets guidelines on the "upper" limit of suffering that humans can lawfully inflict on non-human animals used in scientific experiments. Every year around 12 million non-human animals are used in scientific research in EU nations in the course of commercial product-testing, biomedical research, or open-ended scientific inquiry. Legally permissible levels of suffering inflicted can range from "up to mild" to "moderate" to "severe". Non-human animals are "reusable" if the testing entails no more than "moderate" pain. The new rules also cover our closest living relatives, non-human primates. Each year around 10,000 non-human primates are killed in European laboratories. The new parliamentary directive recommends phasing out wild-caught animals in favor of laboratory-bred animals over a 10-year period; and calls for an unspecified overall reduction in the number of non-human primates experimented on.
Legislators, policy-makers and the research community face a dilemma. On the one hand, researchers must justify vivisection on the grounds that its results are highly relevant to the welfare of our own species. Non-human animals are genetically, neurologically and behaviorally extremely similar to humans: animal activists who impugn the relevance of our "animal models" to human disorders are mistaken when they claim that most research is medically worthless. On the other hand, in order to justify procedures judged horrific crimes if practiced on human beings, researchers must simultaneously allege that an immense moral gulf divides humans from non-human animals. Clearly, constructing morally relevant boundaries is critical to demarcate legitimate scientific research from criminal atrocity. 20th-century history shows how latching on to a real but morally irrelevant difference between groups can lead to a morally catastrophic outcome. Seventy years on, we recognize that being Jewish is not a morally relevant difference. Jews may differ genetically from Gentiles, e.g. in their susceptibility to rare inherited disorders like Tay-Sachs disease; but these genetic differences are morally irrelevant. So too, for most purposes, are the genetic differences between, say, Caucasians and black people, or Romani and non-Romani. Of course ethnocentric bias is not the same as anthropocentric bias. But the moral relevance of genetic differences between members of the genus Homo and our primate cousins is controversial at best: comparative genomics suggests that humans differ from chimpanzees by 1.7% of their genome; humans and gorillas differ by 1.8%; humans and orangutans, 3.3%; humans and gibbons, 4.3%; humans and rhesus monkeys, 7%. Darwinians, if not Creationists, may find drawing any corresponding moral distinctions a serious challenge.
One intuitive justification of our right to vivisect, factory farm and kill other sentient beings is that Homo sapiens is the most intelligent species on Earth. Comparatively speaking, most non-human animals are stupid. Stupid beings don't really matter — though we'd rarely put it so crassly. By most criteria, mature humans are significantly more intelligent than mature apes and monkeys. As it stands, however, this criterion simply doesn't work. After all, we do not regard it as morally acceptable to vivisect the mentally handicapped, orphaned infants, advanced Alzheimer's patients, or even anencephalic babies of our own species. Even when their capacity to suffer is minimal, as is true of anencephalic babies, we recognize that they are incapable of informed consent. By contrast, we have no such qualms about terminating or experimenting on smart inorganic systems, even though the cognitive performance of silicon robots and digital computers in many domain-specific tasks (e.g. chess-playing) increasingly eclipses that of most human beings. If and when we build sentient robots, then their interests cannot be discounted either; indeed it may be hoped that their descendants will not discount ours. In the meantime, it's questionable whether talk of the interests, welfare or rights of one's PC, for example, even makes sense. Degree of sentience, not sapience, is the decisive criterion of moral status.
Yet how impartial are our criteria of sentience? In science, we normally go to extraordinary lengths to exclude possible sources of confounding bias. Hence the whole apparatus of double-blind, randomized, placebo-controlled prospective trials of potential therapeutic drugs in clinical medicine. These painstaking efforts are still frequently subverted by the cash nexus. Thus Big Pharma regularly undermines the most sophisticated methodology and prestigious peer-reviewed academic journals, as the seemingly never-ending scandals of recent years attest. However, the problem in medical ethics is worse. For when evaluating the ethics of conducting experimental research using non-human animals, no systematic effort is made to combat anthropocentric bias. Indeed drug companies invest huge sums of money covertly lobbying against even the most rudimentary measures of animal protection. Naturally, drug company spokesmen would plead they are pursuing remedies for distressing and sometimes life-threatening human diseases. Coincidentally or otherwise, these claims are often true. But commercial companies have a legal obligation to maximize shareholder profits. An impartial concern for animal welfare — whether human or non-human — would be unlawful under existing legislation; and this legal barrier is seldom breached.
One problem with discovering — or constructing — a universal ethic is that no scholarly consensus exists even among secular ethicists. But perhaps the most effective corrective to arbitrary anthropocentric bias is classical utilitarianism. It would admittedly be astonishing if an ethic first explicitly formulated by an 18th century English jurist and philosopher were to hold good for as long as sentient life itself endures. Maybe posthuman ethics will be inconceivably different to its human precursors — though if we ever "transcend" the pleasure-pain axis, would anything matter? Either way, a utilitarian ethic aspires to a God-like breadth and depth of empathetic understanding of other living beings; it can be applied to all sentient lifeforms at all times and places. "Negative" utilitarians stress the overriding moral primacy of minimizing suffering; "positive" utilitarians give equal weight to minimizing pain and maximizing happiness. In principle, advances in neuroscanning can make applied utilitarian ethics an objective and rigorously quantitative discipline. Intensity of pain, for example, can be measured via studies of nerve cell receptor density, neurotransmitter binding and gene expression profiles (etc), although a full-blown felicific calculus is beyond us — and will presumably challenge even the most futuristic supercomputers.
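The negative/positive distinction above can be made concrete with a toy aggregation sketch. This is purely illustrative: the function names, weights and numbers are hypothetical, and nothing so crude substitutes for the neuroscientific measurement the text describes.

```python
# Illustrative only: two ways of aggregating hedonic values across a
# population of sentient beings. Positive numbers stand for happiness,
# negative numbers for suffering; the scale itself is hypothetical.

def positive_utilitarian_score(hedonic_values):
    """Weigh pleasure and pain symmetrically: a simple sum."""
    return sum(hedonic_values)

def negative_utilitarian_score(hedonic_values):
    """Give overriding moral weight to suffering: count only the
    negative values, ignoring offsetting pleasures."""
    return sum(v for v in hedonic_values if v < 0)

# A hypothetical five-member population.
population = [5, 3, -4, 2, -1]

print(positive_utilitarian_score(population))   # net welfare: 5
print(negative_utilitarian_score(population))   # total suffering: -5
```

The point of the contrast: a "positive" utilitarian may judge this population's condition acceptable (net welfare is positive), while a "negative" utilitarian focuses on the outstanding suffering that remains to be abolished.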
Of course, many bioethicists advocate a conception of (post)human flourishing that is richer than the classical utilitarian ethic as conventionally conceived; it is hard to measure or compare the neural correlates of, say, "eudaimonic well-being." However, a utilitarian ethic in its purest form allows the impartial measurement of morally relevant distinctions: not all discrimination between species need be arbitrary or self-serving. Lifeforms that lack a central nervous system, for instance, are unlikely to possess a unitary experiential field, and hence unlikely to feel pains over and above the comparatively minor pains of their constituent nerve ganglia. This distinction isn't an invitation to moral complacency or an arbitrary phylum chauvinism: octopuses, for instance, are clearly sentient as well as intelligent beings. But the crude vertebrate/invertebrate dichotomy does license valuing the lives of, say, primates over earthworms. In an ideal world, and quite possibly our posthuman future, the well-being of the humblest creature could be safeguarded along with the pinnacles of evolution. In principle, the molecular signature of unpleasant experience could be banished in all creatures great and small. Unless and until this utopian-sounding vision is realized, however, a utilitarian ethic recognizes the messiness of real life with its tradeoffs and sometimes ugly moral compromises.
The adoption of a utilitarian ethic nonetheless has radically counterintuitive consequences for the treatment of our vertebrate cousins. Pre-reflectively, we have a "dimmer switch" model of sentience: "primitive" animals have minimal awareness and "advanced" animals like human beings experience a proportionately more intense awareness. The problem with the dimmer switch model is that the most intense experiences, most notably phenomenal pain, are also the most phylogenetically ancient, whereas the most "advanced" modes, that is to say linguistic thought, are phenomenologically so thin as to be barely accessible to introspection at all. Our linkage of moral status to cognitive capacity, and linkage of cognitive capacity to degree of sentience and capacity to suffer, is intuitively appealing. Yet its basic premise has precious little evidence to support it. Whales, for instance, have bigger pain centers than humans; maybe they have bigger pains too.
A problem with endorsing a utilitarian ethic over a rights-based approach is that utilitarianism apparently permits non-consensual experimentation, not just on non-human animals, but on humans as well if the prospective payoff is great enough. However, utilitarianism and rights-based approaches to ethics need conflict less than is commonly assumed; and in practice they may even prove complementary. Thus the utilitarian may advocate a rights-based regulatory framework on the grounds it leads to better felicific consequences, whereas a (future) rights-based theorist may endorse, not just our inalienable right to "the pursuit of happiness", but the right to happiness itself. In principle, the biotech revolution and our impending technical capacity to reprogram the reward circuitry of the CNS may permit radical engineering solutions — and eventually the abolition of suffering throughout the living world. In one scenario, we might all enjoy the right to be maximally happy — where the inclusive rather than contrastive sense of "we" extends to all sentient beings. Critics will dismiss this prospect as speculative science fiction.
A counterargument to the universalist approach explored here is that ethics is subjective by its very nature. Therefore science "ought" to be value-neutral. This inference may charitably be described as paradoxical.
Might the impartial application of a universal ethic bring medical science to a halt — since its enforcement would drastically restrict the kinds of research practiced by vivisectionists on non-human subjects today? Laws and bureaucratic regulation can certainly retard the growth of scientific knowledge. The taboo on involuntary human experimentation since the Nazi era curbs access to all sorts of illuminating data that renewed human studies could provide. Likewise, extending the proscription of vivisection to other species would impede some kinds of scientific knowledge; but conversely, greater protection for non-human animals will accelerate the switch to the three Rs of enlightened biomedical research (Replacement, Reduction, Refinement). Critically, the exponential growth of computer processing power should allow the computational simulation of entire living organisms. Moreover, a utilitarian ethic does license a role for animals in research — human and non-human — insofar as suffering can be avoided. A host of human and non-human animal investigations don't involve suffering, or at least needn't involve suffering if an adequate legislative and enforcement regime were in place.
The biggest stumbling block to the adoption of an impartial ethic is probably our deeply felt moral intuitions. Consider a human baby, even a brain-damaged and extremely premature baby. Compare that baby with a mature pig or monkey. No abstract argument of moral equivalence can persuade most of us that the life or suffering of a human baby is morally equivalent to the life or suffering of the pig or the monkey. Shouldn’t our naive intuitions carry any weight?
I think the only honest answer is: no. Our intuitions are systematically biased. Evolutionary psychology explains how our moral intuitions and the rationalizations they spawn have been shaped by millennia of natural selection to maximize the inclusive fitness of our genes, not to track the welfare of other sentient beings impartially conceived. Many human cultures have found nothing intuitively wrong with aggressive warfare, slavery, wife-beating, infanticide or female genital mutilation. Ultimately, folk morality is a doomed enterprise, as hopeless as folk physics. A mature posthuman ethics, I'd argue, must be committed to the well-being of all sentient life; and mature posthuman technology offers the means to deliver that commitment.