H+ Magazine
Covering technological, scientific, and cultural trends that are changing, and will change, human beings in fundamental ways.

Editor's Blog

David Pearce
October 21, 2010


Reality is big. So our optimism must be confined to sentient beings in our forward light-cone. But I tentatively predict that the last experience below "hedonic zero" will be a precisely dateable event several hundred years hence.

Here are five grounds for cautious optimism:

1)  We Shall Soon Be Able To Choose Our Own Level Of Pain-Sensitivity

It's hard to convey in words the frightfulness of unrelieved physical pain. Millions of people with chronic pain syndromes suffer severe physical pain each day. However, a revolution in reproductive medicine is imminent. We'll shortly be able to choose the genetically-shaped pain thresholds of our future children. Autosomal gene therapy will allow adults to follow suit. Clearly, our emotional response to raw pain is modulated by the products of other genes. But recent research suggests that variants of the SCN9A gene hold the master key. Thus in a decade or two, preimplantation diagnosis should allow responsible prospective parents to choose which of the SCN9A alleles they want for their future children — leading in turn to severe selection pressure against the SCN9A gene's nastier variants.

At present, we can't envisage safely choosing one of the (extremely rare) nonsense mutations of SCN9A that eliminates physical pain altogether. A future world of nociception without any phenomenal pain at all will depend on advances in neuroprosthetics and artificial intelligence — and an ethical and ideological revolution to match. Yet by selecting benign alleles of SCN9A both for ourselves and our children, the burden of suffering can be dramatically diminished.

2)  We Can Soon Choose How Rewarding We Want Our Daily Life To Be

The brain is a dauntingly complex organ. Yet the biological roots of mood and emotion are primitive and neurologically ancient. Their metabolic pathways are strongly conserved in the vertebrate line and beyond. This paper illustrates how the presence or absence of a single allele may dramatically enrich or impair the quality of one’s entire life.

If you were in a position to choose, which COMT allele would you pre-select for your future offspring? I predict most prospective parents will pick the "happy gene". Or would you prefer to play genetic roulette as now?

Of course, pitfalls abound. How can we guard against unanticipated side effects from novel gene therapies? What will be the inevitable "unintended consequences" of life-changing innovation? And what will be the societal implications of a population biologically predisposed to enjoy richer and happier lives? But the ethical pitfalls of seeking to preserve the status quo are no less troubling.

Genetic tweaking to promote richer experience is just a foretaste of posthuman sentience. I predict that our descendants will enjoy gradients of genetically preprogrammed bliss every day of their lives.

3)  Steak Lovers and Vegans Alike Can Soon Eat Cruelty-Free Diets

Today, billions of sentient beings endure lives of unimaginable suffering in our factory farms. Factory-farmed animals are then killed in our slaughterhouses so we can eat their dead flesh. Vegan animal activists campaign against the meat industry and also the wider exploitation of nonhuman animals. But the capacity to rationalize self-interest is deeply rooted in human nature. So I think we need a fall-back strategy. Transhumanists believe that ethical problems frequently have technical solutions. Thus New Harvest is the world’s first organization dedicated to promoting research into the development of in vitro meat and other meat substitutes. Certainly, the path from today’s cultured in vitro mincemeat to mass-produced in vitro steaks is technically challenging. But the development of cheap and delicious gourmet in vitro meat — of a taste and texture indistinguishable from the flesh of slaughtered animals — can potentially bring the era of factory farming to a close.

Traditionalists will say in vitro meat products are “unnatural”. But compare them to factory farming.

4)  Carnivorous Nonhuman Predators Can Be Phased Out Too

Perhaps the biggest obstacle to phasing out suffering altogether is wild animal suffering. Right now, billions of sentient beings in the wild are dying of thirst and hunger, or being disembowelled, asphyxiated or eaten alive by predators. Jeff McMahan's landmark article in the New York Times is the first print-published plea from a mainstream academic calling for predatory carnivorism to be phased out. Needless to say, the technical obstacles, notably pan-species fertility control, are immense. But the biggest obstacle to compassionate ecosystem design is simply status quo bias.

5) We May Be On The Eve Of An "Intelligence Explosion"



I believe the existence of suffering in the world is the most morally urgent challenge we face. Not everyone would agree; and even if it's true, the moral priority of abolishing suffering doesn't entail that suffering is best tackled by a direct assault. Reckless haste leading to a botched utopia could be disastrous. For a start, it's vital that the human species avoids the global catastrophic and existential risks we face this century. Nick Bostrom usefully conceives of existential risk as “a permanent curtailment of human axiological potential”. Moreover, humans are the only species capable of phasing out suffering throughout the living world. If we were to destroy ourselves, then pain-ridden Darwinian life would go on indefinitely. From a more personal standpoint, the prospect of posthuman paradise can seem almost cruel if we ourselves won't be around to enjoy it. This is one reason to celebrate the groundbreaking longevity research of Aubrey de Grey and his colleagues at SENS. However, many transhumanists believe that our most urgent priority is negotiating a "friendly Singularity" that they anticipate later this century. Springer have recently announced the first scholarly volume devoted to the technological Singularity hypothesis: The Singularity Hypothesis: A Scientific and Philosophical Assessment.

As a sceptical, "bioconservative" transhumanist, I still reckon getting rid of suffering will take hundreds of years and entail "cyborgification" rather than "uploading". Yet it's always good to take (very) seriously the prospect that one will be proved wrong. If a technological Singularity really is near, then the abolition of suffering is feasible this century.

Finally, I'd like to commend the Transhumanist Declaration (1998, 2009) with its commitment to the well-being of sentience. Transhumanists are a lively and diverse community. The Transhumanist Declaration is a valuable reminder of what unites us.

David Pearce is a vegan utilitarian. He is author of "The Hedonistic Imperative" (1995), which advocates the use of biotechnology to abolish suffering throughout the living world. In 1998, he co-founded the World Transhumanist Association, now Humanity Plus, with Nick Bostrom.

 


[heading]

Interviews

[/heading]

Alleviating Suffering & Achieving Hedonic Zero / Altruism

The Naturalisation of Heaven - The Lotus Eaters - Happiness & Motivation

[quote cite="David Pearce" url="http://www.erraticimpact.com/~topics/html/philosophical_isms.htm"]The Hedonistic Imperative outlines how genetic engineering and nanotechnology will abolish suffering in all sentient life. This project is ambitious but technically feasible. It is also instrumentally rational and ethically mandatory. The metabolic pathways of pain and malaise evolved only because they once served the fitness of our genes. They will be replaced by a different sort of neural architecture. States of sublime well-being are destined to become the genetically pre-programmed norm of mental health. The world's last aversive experience will be a precisely dateable event.[/quote] [quote cite="David Pearce"]I predict we will abolish suffering throughout the living world. Our descendants will be animated by gradients of genetically pre-programmed well-being that are orders of magnitude richer than today's peak experiences.
- p.114 Ethics Matters by Peter and Charlotte Vardy - SCM Press, 2012[/quote]

67 Comments

    Can you explain why you think that wildlife is your property? I see many people think the same, but when I ask them there is no meaningful answer. The same applies to any intervention in nature, fishing for example: fishing puts pressure on species to become smaller and smaller. I want to keep fish the way they are. Why do people who like to eat fish, or some vegan radicals, have more rights over them than I do?

      This is not a question about who "owns" wildlife. This is about stewardship of sentient life by those agents who are gifted with intelligence, rationality, and ethical reasoning.

      If we decide to keep natural wildlife the way it is, even when we have the means and insight to change or replace it, we decide to literally torture each and every single sentient who suffers in it. And these sentients cannot prevent the suffering themselves, nor did they ever consent to their own existence within these systems to begin with.

        I do not think that human morality applies to those who do not understand and share it, whether they are people or animals. Morality is the product of a particular society, and even today there is no single common, global morality. There are many examples of such differences: the way people see abortion, copyright laws, the death penalty, free access to medicine. There is no god-given morality or aim that everybody should follow. Besides, all people are biologically the same, which is why certain generalizations are possible, while animals are not.

          Your point about the relativity of morals is disconnected from your original point about ownership. By using concepts such as property, you imply that we should care about these concepts morally. If morality is relative, there's no reason why anyone should feel obliged to justify their interventions in wildlife to you in any way.

          However, I think it makes more sense to think of ethics as answering the question "What do I want to do?" rather than "What do I have to do?"

          Do we want to indefinitely perpetuate the involuntary suffering of uncounted billions of sentients who never even consented to their own existence just because it is natural? My personal answer is no, and I would be ready to cooperate with those agents whose answer is also no (if the consequences were to be thought through wisely enough).

    I'd like to interject my $0.02 again. Phasing out animal suffering does not require eradicating predatory species. Or to qualify -- once we have the necessary technology, we can conceivably alter animal behavior to prevent suffering, including predation.

    What's wrong with having a lion lie down beside a lamb, both creatures healthy, secure and content? If you find that image too jarring, and would prefer a vicious feline who would rend the other animal, answer this: do you think eliminating human disease is also a disturbing vision?

    For the record, I am in awe of many predatory beasts. I love the majesty of a tiger; I admire the power of a bear; I am fascinated by the habits of a leopard. And I'm dismayed that these creatures are endangered. I would rather see a species of gazelle go extinct than lose any of the big cats.

    But we are not discussing the animal kingdom as it exists today. We are speculating on what we should do, if anything, once we have the power to stop the suffering of any sentient being. Of course, one option is to practice non-interference -- allow animals to follow their evolution-dictated natures, and neither help nor hinder them. That's the desire many transhumanists would espouse.

    However, let's ask why we should follow this do-nothing approach. Does it appeal to an aspect of our conservative nature, some little traditional part of our mind that says we should not alter the world to the point where it's unrecognizable? Is the idea of an earthly paradise shared by man and beast, devoid of any violence and disease, simply too disconcerting because it's so far removed from the current state of the world?

    If that is truly the reason, then keep this in mind -- you're a transhumanist, and you've presumably already come to terms with the prospect of an indefinite human lifespan. You've completed the process of considering immortality and all its consequences for the planet. And you've decided that yes -- it would be a radical disruption to our world, but a *worthy* radical disruption.

    So view through the same lens the prospect of ending animal suffering, not by eliminating various predatory species, but by enhancing them so they are no longer compelled to predate; indeed -- so that they no longer *need* to predate. Is that more profound a shock to the natural order of things than 10 billion people living forever?

    From reading this article, the word "suffering" is used pretty broadly but seems mainly to be directed at physical suffering. Emotional suffering, like depression, is also brought up with the idea of a happy gene, but other kinds of suffering do not seem to be there. Really, suffering can be anything a person dislikes and finds harmful. If you take the assumption that a world without suffering would be insufferable, then reaching a utopia without suffering would, in this sense, be impossible.

    The concept of removing all suffering and making everyone content all day is silly. Pain is motivation: it drives progress and innovation; it is a motivating factor like no other. Why would you want to achieve and strive for more when you are always content and comfortable?

    On the other side, the risk of pain, of failure, of injury and of death are the things that give success its intense satisfaction. Pain is also an invaluable teacher: it can make you look at things in new ways, or make sure that you never forget a lesson. Every scar on my body is a testament to attempts made and lessons learned, with adrenaline-soaked, blood-stained memories burned into each one.

    Some transhumanists seem to feel that the world should be made of nerf. There should be no pain, no suffering and infinite life. Don't get me wrong, I see the limiting of these things and the extension of life as good things. However, pain and suffering do have merit. Death's inevitable bite gives life an importance like no other.

    Will immortals savor the sight of a sunrise quite the same as a man who knows it may be his last?

    Death is also a way of removing old ideas. If a particular way of thinking or an idea becomes entrenched in a generation, an idea that is detested by the next generation, the old generation seldom changes perspective and the new generation gets the social change they want only when the old ideas literally die off. --imagine a person who at one time owned a slave plantation or a beaten down slave that lived to today; I see resentment running deep enough to last 200 years. Some ideas and views only die when the people who hold them do.

    As for removing carnivore species, that is so silly. Plants grow, communicate via chemical signaling, can move, anticipate damage and prepare defenses, and otherwise seem as though they do not want to die. They do lack a CNS but they do have primitive senses of touch, hearing, sight and smell and they do react to these stimuli. Should we remove all plant eating species as well? End plant suffering now!

    Individual cells on their own are alive; they are the discrete units that you and I are made of. At what point is a collection of cells considered alive? Why is the death of 1 cell okay but the death of a collection of 10 trillion is not? And why say that the collection of cells must also be organized in a particular way to be considered alive? Human, fly, plant: we are all made of cells that want to live and participate in the big genetic algorithm called life.

    --I am pro-abortion, but don't kid yourself: cells that are alive... are alive

    To impose our morality of the moment (that is right, morality is relativistic and can change) on nature, which is simply carrying out a vast genetic search algorithm (looking for local survival and reproductive optima), is silly.

    If you have ever programmed a genetic algorithm you know that keeping all of your past generations alive is the worst thing you can do! Unless we want to manually control every aspect of the environment, with people managing populations and allocating resources to every field mouse and fly, we should just let the bloody program run.
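    The point about discarding past generations can be sketched as a toy generational GA (a minimal illustration only; the "one-max" fitness function and all parameters are invented for the example, not taken from any real ecosystem model). Each new population replaces the previous one wholesale, keeping only a small elite:

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=50, elite=2):
    """Toy generational GA: each generation REPLACES the last,
    keeping only a small elite, rather than retaining everyone."""
    # random initial population of bit-string genomes
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        next_pop = [g[:] for g in ranked[:elite]]    # only the elite survive
        while len(next_pop) < pop_size:
            # parents drawn from the fitter half
            a, b = random.sample(ranked[:pop_size // 2], 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]                # one-point crossover
            if random.random() < 0.1:                # occasional bit-flip mutation
                i = random.randrange(genome_len)
                child[i] = 1 - child[i]
            next_pop.append(child)
        pop = next_pop                               # old generation is discarded
    return max(pop, key=fitness)

# Example: maximize the number of 1s in the genome ("one-max")
best = evolve(fitness=sum)
```

    Retaining every past generation would dilute the selection pressure that makes the search converge; wholesale replacement is what pushes the population toward the optimum.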

      "Pain is motivation, it drives progress and innovation, it is a motivating factor like no other. Why would you want to achieve and strive for more, when you are always content and comfortable?"

      There's not only one state of pain, and not only one state of comfort. There are varying degrees. The execs at Google have lifestyles that are quite comfortable. Does that prevent them from progressing and innovating? Not on your life! They don't need to be threatened with starvation or physical torture in order to be productive. The idea of achieving greater technological heights and even more financial milestones are incentive enough. They want to build a better Google because it will reward them in ways that your primitive pain-motivator model just can't comprehend.

      "Every scar on my body is a testament to attempts made and lessons learned; with adrenaline soaked, blood stained memories burned into each one."

      Perhaps you want to impress us with your machismo. It didn't work. You can have all the scars and bandaids you want -- intellectual feats and rewards are still more important to some of us. I also work out and run to keep fit. But if I could take a pill to derive the same benefits, I probably would. Does that mean I would then waste time riding the couch, doing nothing but enjoying the feeling of comfort? No, I would simply put this new time to other uses, knowing that I no longer need to worry about exercising. But if someone else prefers to spend an hour a day keeping fit the traditional way, hey, he has every right to. Just don't assume I'm unmotivated because I depend on technology for some of my physical well-being.

      "Some transhumanists seem to feel that the world should be made of nerf. There should be no pain, no suffering and infinite life."

      That's quite the straw-man argument, buddy. The desire to eliminate disease and violence doesn't equate with a wish for a world made of nerf. We had pain and suffering galore in the early days of our civilization, and indeed it did motivate us to progress and innovate. But that doesn't mean we require the same types of motivation today. Do you get out of bed and go to work out of fear that you'll otherwise be eaten by a leopard? Do you check your voicemail because you believe it will prevent the fire from going out in your cave? No, you do these things for reasons that our primitive ancestors wouldn't even comprehend.

      In the same way, folks in the future will be motivated by reward systems that you and I cannot conceive of today. They will act to avoid undesirable consequences ("if we don't get that antimatter generator completed on time, the Neptune colony will turn to fusion for its energy needs"). And they will be spurred on by incentives ("once I develop an android that sees in infrared, I can be the first to explore the earth's inner mantle"). The future will contain endless excitement and opportunities, without the need for physical pain and suffering.

      You're correct that plants and individual cells are alive. But as you point out, they do not possess central nervous systems. As far as science is aware, they are not sentient entities. We may change and update this view down the road, but for now we can capture its essence in the phrase "plants do not think or feel pain." We will also need to revisit this subject once we create machines that can think and act as humans. Should they be granted the same rights? It's going to be a very protracted and heated debate.

      As for it being "silly" to "impose our morality...on nature, which is simply carrying out a vast genetic search algorithm," would you call it silly to alter the course of an asteroid about to collide with the earth? After all, celestial objects move according to the laws of physics, and an asteroid's motion is simply nature carrying out a vast classical mechanics computation.

      Perhaps you'd prefer the asteroid to strike the earth, because then you may acquire a few more scars to brag about. :)

        It has nothing to do with "machismo", I am simply pointing out that pain, suffering and death are all things that do have merit. They are beneficial to the system in some ways, and removing them will have far reaching consequences. I was pointing out the opposing side; some people embrace their mortality. Some also feel that while minimizing pain can be good, removing it may not be the best idea.

        Think about this:
        How exactly do you stop all people from having the ability to use force?
        Do you engineer them to all be happy and hyper-empathetic?
        Will you remove the person's capacity to get angry or feel passion?
        What happens when you have a large group of happy, docile people and a large group of people who don't undergo this engineering?
        So long as you have people capable of using force as a means to an end, what will you do to stop them from using it?

        My remarks about scars were illustrating that some people take risks, get hurt, and that sometimes a bloody nose has benefits. Violence, death and pain can bring out the best as well as the worst in humans.

        "Do you get out of bed and go to work out of fear that you'll otherwise be eaten by a leopard? Do you check your voicemail because you believe it will prevent the fire from going out in your cave? No, you do these things for reasons that our primitive ancestors wouldn't even comprehend."

        I do these things because if I don't, I'll lose my job. I will then run out of money, my heat will be turned off, I will lose my shelter, and I will suffer the pain of hunger and exposure. Hunger and exposure are two things they would very much understand.
        If food and shelter will always be afforded to me in abundance, why should I ever check my voicemail (I hate that thing)?

        As for the CNS argument, you missed my point. You are saying that one type of death is okay but another type of death is not. If you take a bacterium that is motile and expose it to something harmful and it swims away, isn't that a sign of not wanting to die? If you touch a Mimosa pudica and it retracts to protect itself, isn't that a sign that it is trying to stay alive? You base your argument on saying that only things with perceptions of pain that match our own should be protected.
        Don't forget: you are an animal, and animals are not A living thing, they are COLLECTIONS of living things; they are stem cells that differentiated and work together to carry out a reproductive process. Your death will be the death of a collective.
        You have taken to putting values on cellular life based upon how that collection is arranged. I think that no matter how you want to slice it, death is death and thus your morality is flawed.

        -You never really addressed my point that human morality is something that changes and is based upon our sociology and biology. Tweak either and our morality changes. It is relative. Should we constantly update nature every time society makes a change in its thinking?

        -You totally avoid my defense of death as a necessary evil.

          "I do these things because if I don't I'll lose my job. I will then run out of money, my heat would be turned off, I would then loose my shelter and suffer the pain of hunger and exposure. Hunger and exposure are two things they would very much understand."

          I agree with this, it is a cornerstone of capitalism. However, this doesn't explain why you don't just kill yourself instead of doing anything. There are easily researchable painless ways of committing suicide. This means no one has to work if they really absolutely don't want to. Nonetheless, most people decide not to kill themselves. In other words, there must be positive incentives in life that are strong enough for people to keep going even if they have painless suicide methods available.

          In the absence of coercion, if working conditions were improved, I think that most people would still work at least part-time. And some rather creative and innovative people would still work 50+ hours a week voluntarily. Not because of negative motivations, but because of positive ones (they don't have to, but they want to).

          A key question in the idea of biotechnological abolitionism is how far this concept can be taken. Is it possible to replace all the negative motivators with positive ones?

          "If food and shelter will always be afforded to me in abundance, why should I ever check my voicemail (I hate that thing)?"

          Maybe you'll just be bored at some point and check it for the heck of it? If you were materially secure, would you really completely stop communicating and being productive?

          "So long as you have people capable of using force as a means to an end, what will you do to stop them from using it?"

          More checks and balances in a world of greater interconnectedness and transparency. Collective counter-force where necessary. The key here is that this could still work in a world without suffering, if civilization is organized in such a way that people who use force for ulterior motives will have more disadvantages than advantages as a consequence. Much aggression and destruction is not based on rationality but on Darwinian impulses and power imbalances.

            "Maybe you'll just be bored at some point and check it for the heck of it? If you were materially secure, would you really completely stop communicating and being productive?"

            If we have machines that will be more advanced than people and will be able to be modified and updated faster than biology, at that point, why do anything at all?
            Why be productive when a machine can do it better?
            Why communicate when the pleasure you seek will be on tap?
            Why not just remove the feelings of boredom?
            Why not pass control off to the machines and live in engineered bliss?
            Why would we need those motivators anymore?
            It stands to reason that at that point, if machines are going to outpace us in every way, indefinitely, why try at all to keep up in a race we can't win?

            Just give birth and wire your child to live in an unceasing ecstasy beyond all that exists and have everything they need supplied to them for eternity.

            Is this not an end that suits you?
            Should we always have the drive to create and think and explore even when the job can be done better and faster by a machine?
            Why?

            Because there are higher things a person can strive towards, because Socrates convinced you?

            Why allow a person to have to grapple with the meaning of life and the universe if it can bring nihilism and mental anguish?
            Elimination of pain is the goal, no?

              "Is this not an end that suits you?"

              It is indeed an end that suits me. However, unless your utopian assumption of an all-providing, infallible caregiver machine is realistic, wire-heading children from birth is not an evolutionarily stable solution.

              Providing incentives with subtle and profound reward stimuli for productive behavior within the right equilibrium could theoretically be stable for a long time. Already, people are doing enormous amounts of voluntary work in virtual worlds without gaining any personal material advantage from it (and without being coerced into it by threats of suffering).

              Here's a recent TED talk that highlights this: http://www.ted.com/talks/tom_chatfield_7_ways_games_reward_the_brain.html

                *"It is indeed an end that suits me."
                This is where we differ. I see things like the struggle to understand, the risk of harm, the risk of distress, and the daily struggles of life as the things that make the game worth playing.
                I think risks could be minimized and comfort could be improved (ending mass starvation, improving health care, extending lives, limiting war), but the total removal of pain and death makes life a dull and endless game. What fun is a game you can never lose?


                I want you to go and play a game of Monopoly. Only I want you to play without money. You are also allowed to build an infinite number of hotels on a property. I want you first to divide the board up equally among the players, then allow everyone to build whenever they want. There is no possibility of ever losing anything you've amassed. No money is to be traded and the game will never end. You only get the satisfaction of building more hotels.
                Would you play the game forever? Would you want to engineer your offspring to be able to play that game forever? Isn't a game with some inequality and risk of loss more fun?

                *"Providing incentives with subtle and profound reward stimuli for productive behavior within the right equilibrium could theoretically be stable for a long time."
                Who gets to define productive vs. unproductive? Who picks the new set of reward stimuli? What will prevent this ability from turning some people into happy mindless drones? Who gets to set the balance and who prevents its exploitation?

                *"Already, people are doing enormous amounts of voluntary work in virtual worlds without gaining any personal material advantage from it (and without being coerced into it by threats of suffering)."
                Once a person's fundamental needs are taken care of, what do they do with their time? We are a species that is naturally goal-oriented. When we are no longer presented with the difficult goals of survival, we set artificial goals to satisfy the hard-wired urge. We pick a target and work on it until we meet the goal, then we set the next goal. In the case of survival, we set goals of creating fire, hunting, building better shelters. If these people didn't volunteer their skills in developing virtual worlds, I would argue that they would garden; memorize hockey stats to win the office pool; build a deck; cook. They would set some other goal and pursue it, because they Need to. If we don't carry out these artificial goals and spend all day doing nothing, we often lapse into depression.
                --We already have a built in system where we endure suffering when we aren't productive in our own eyes and gain deep satisfaction when we are.
                This interplay of happiness and despair is the essence of what makes life worth living.

                  anonymous, you say that "the interplay of happiness and despair is the essence of what makes life worth living."
                  But it's despair that leads almost a million people in the world to take their own lives each year. For hundreds of millions of depressed people, life seems empty of meaning and purpose. Redesigning our reward circuitry so we can enjoy gradients of intelligent bliss isn't a recipe for a "dull" life, but for an exhilarating life. Compare how the happiest people today also tend to find life intensely significant. Take care of happiness and the meaning of life takes care of itself. Indeed, in future boredom can become biologically impossible since its molecular signatures are absent. Instead, information-signalling dips in fascination can preserve critical insight minus the nasty raw textures of tedium as we know it now.

                  If contrast effects were really essential to a "meaningful" life, then we could implant false memories. But they strike me as wholly redundant.

                  When suffering of any kind becomes optional, would you really impose it on others? How much, how often, and enforced by what means?

                    I hate to break it to you, but life is empty of meaning and purpose. You are going to die and whether it is today or tomorrow the universe doesn't care. If you choose to check out early, that is your right. But I do feel that if a person wants to be treated for mental illness or depression they have that right as well.

                    The funny part about treating a person for depression is that it doesn't make them numb to life's ups and downs. In fact, drugs that do make patients emotionally flat - while benefiting the patient by removing their depression and stabilizing mood - are rejected by patients, who switch to other drugs. Ask your GP at your next visit. People like to have a broad emotional palette.

                    "Redesigning our reward circuitry so we can enjoy gradients of intelligent bliss isn't a recipe for a "dull" life, but for an exhilarating life."
                    It is a very lame gradient. You either feel cocaine-happy or heroin-happy. I like the current range: can't-get-out-of-bed-because-my-wife-left-me to orgasmic ecstasy. It colors the world.
                    A constant baseline reward comes at no cost in your world. There will be no risk of feeling down when we fail or are unproductive. If you are a belligerent person who alienates all the people around you, who is going to get upset and cry, or yell at you? Will you feel bad? If you can never get depressed, or feel down, how will you know when you've done others wrong?
                    A world of constant bliss is dull and artificial.

                    "When suffering of any kind becomes optional, would you really impose it on others? How much, how often, and enforced by what means?"
                    I would not force it upon anyone. Your brain is nicely tuned to do it. I want to see mental illness, genocide, mass starvation and disease eradicated like you do, but I think the human pleasure-pain dynamic is a design quirk that has a lot of benefits and should not be discounted. I would always let people choose - it is their body - but I would caution, as I am now, that pain has its benefits to humanity. It is one of the biggest things that makes us who we are. Anger, sadness, pain, loss, grief, empathetic feelings of suffering/pain are all very important to holding society together and can facilitate some of the greatest moments in a person's life.

                      Anon, you rightly say: "I would not force it upon anyone."
                      But this is the point: suffering of any kind should be optional. Today suffering is mostly involuntary - part of our evolutionary heritage because a predisposition to pain, anxiety, resentment, jealousy (etc) maximised the inclusive fitness of our genes in the ancestral environment of adaptation. As it happens, I predict that eventually all suffering will be phased out. But prediction is distinct from advocacy. We're not talking about hauling off the pain-ridden kicking and screaming into the pleasure chambers.

                      You say, "A world of constant bliss is dull and artificial."
                      A world with clothes is artificial. This is not a compelling argument for nudism. Whether a phenomenon is "artificial" or "natural" is morally neutral.
                      Not even constant bliss would be "dull" - wireheading never ceases to be exhilarating. But the argument here is not about uniform bliss but whether we should elevate our hedonic baseline - and the hedonic baseline of our future children - so they can be predisposed to enjoy richer lives. To recalibrate the hedonic treadmill isn't to dismantle it altogether. Thus other things being equal, increased happiness enhances diversity because it broadens the range of stimuli an organism finds rewarding. By contrast, depressives tend to get "stuck in a rut".

                      You rightly remark on the affective flattening induced by selective serotonin reuptake inhibitors (SSRIs) - misleadingly marketed as "antidepressants". But other, unlicensed agents are "emotional intensifiers" - drugs like MDMA ("Ecstasy") that flood the synapses with serotonin, dopamine and oxytocin. MDMA induces profound emotional release. In future, we will enjoy the option of regulating our own emotional depth safely and sustainably - and with a degree of fine-grained control that is unimaginable today.

                      The question of the Meaning Of Life is perhaps a little too broad to be tackled here. But if you are in agony, the meaning is self-intimating; i.e. to get the pain to stop. In our biotech-driven future, we can extend our compassion not just to members of our own species, but to all sentient beings.
                      Life based on gradients of intelligent bliss will certainly feel more subjectively meaningful than today's normal hedonic baseline of "blah". But will posthuman life really be more meaningful? Well, is a joke that makes you laugh really funny?

                      A society where life can be one long MDMA ("Ecstasy") trip... and you expect productivity. HAHAHAHAHA!

                      If we have "the option of regulating our own emotional depth safely and sustainably" why would anyone ever come down form their self controlled ecstacy?

                      Why be productive when life is one long orgasm?

                      Anon, there are indeed technologies that can deliver sustained orgasmic pleasure. MDMA is not among them. My point in citing MDMA was to contrast its emotional depth with the affective flattening induced by SSRIs. We need safe, sustainable mood-brighteners and gene therapies that promote emotional depth rather than shallowness.

                      Productivity? In the contemporary world, a strong positive correlation exists between the degree of trust in a society and economic growth. Safe and sustainable enhancement of oxytocin function can potentially make us not just more trusting but trustworthy - potentially enhancing economic growth as well.

                      Despite the copious oxytocin release induced by taking MDMA, no productivity-explosion can be anticipated from all-night raving. But whoever suggested otherwise?

                      "Safe and sustainable enhancement of oxytocin function can potentially make us not just more trusting but trustworthy - potentially enhancing economic growth as well."

                      Is this really a good idea? There's not one person on the planet whom I would blindly trust, including myself. And this always seemed like a healthy attitude to me. Enhanced well-being and enhanced compassion - these make sense and could herald enormous potential. But enhanced gullibility?

                      It may be wiser to create new modes of transparency and boost critical thinking in a more interconnected world of checks and balances. As for economic growth based on trust, that trust has to be well-deserved rather than being based on fraudulent or opaquely distorted markets. I find it hard to see any benefit from artificial levels of trust where transparency and critical feedback should serve as a corrective instead. If this corrective works, artificial trustworthiness is redundant. If it doesn't work, the remaining defectors would receive a free pass, adding burdens and opportunity costs to society.

                      Hedonic Treader, all good points. I was just trying to counter the idea that life based entirely on a biology of information-sensitive gradients of well-being will be inconsistent with continued economic growth - and the misconception that extreme well-being must be one-dimensional, akin to "getting high".
                      The role of oxytocin in cognitive processing is actually quite complex. (perhaps see
                      "Oxytocin Makes People Trusting, but Not Gullible, Study Suggests":
                      http://www.sciencedaily.com/releases/2010/08/100824103535.htm )
                      Although we might all benefit from having our native oxytocin function enriched, it would probably be premature to talk of its mass-adoption in the near-future. We don't yet know enough.

                      Ah, now I see. Thanks for the link.

                      "I would no force it upon anyone. [...] I would always let people choose"

                      Children can - and often do - suffer long before they develop the ability to consent to anything in this world. The same is true for all non-human animals capable of suffering. Your position is therefore inconsistent.

                      You sir are wrong.
                      "I would no force it upon anyone. [...] I would always let people choose"
                      Means that I would not force people not to undergo this treatment, I would leave the choice to remove all pain from their life up to them.
                      If a child is under a person's care it is then the guardian's decision.
                      Ultimately if the child does suffer in childhood, when they reach adulthood they could freely choose to remove all pain from their lives.

                      As for animals, my choice not to apply this technology to animals would not constitute "forcing" them to undergo pain. They could evolve, become sentient, undergo a singularity and wipe out their own pain -- not a joke, we did.
                      If they could also make a conscious choice not to endure pain and ask for this intervention then it should be given to them.

                      I am not forcing pain or technology on anyone or anything.

                      Anon, we wouldn't choose to have children with a default genetic predisposition to neuropathic pain - and offer the option to cure the condition only when they are adults. So will it be ethical to have children with a default predisposition to psychological pain and offer a cure only when they reach adulthood? Maybe. But I predict one day bringing such suffering into the world will be reckoned as akin to child abuse.

                      When does neglect shade into culpability?
                      If one sees a young child drowning before one's eyes, then uncontroversially one has a duty to intervene - even if one is not the responsible caregiver. An analogous - though less demanding - duty is usually recognised when the victim is a "domestic" non-human animal like a drowning dog. Now intuitively, there is an immense gulf between caring for free-living human and nonhuman animals in faraway Africa, say, and our responsibilities closer to home. But technological growth - and maybe an "intelligence explosion" - are poised to give us the same degree of control over the rest of the living world as we have over the fate of a toddler in his paddling pool next door. Yes, one could let the unfortunate toddler drown - or some anguished creature starve, die of thirst, or get eaten alive by parasites or predators. But soon such (in)action will be a deliberate choice - I hope an ethical choice.

                      Ah, I see I misunderstood this point. In other words, your position didn't even imply that involuntary suffering is an ethical problem unless it's actively inflicted. Basically, it boils down to: "If they can't consent, it's okay to (passively) let them suffer until they develop the ability to explicitly consent to interventions." According to this logic, vaccinating a child would be immoral because they can't consent to it, and if they get sick, it's not your fault.

                      "I am not forcing pain or technology on anyone or anything."

                      It seems to me that forcing pain on non-consenting sentients is exactly what we would be doing. I don't see the moral distinction between passively letting the involuntary suffering of entities who can't stop it themselves happen, and actively forcing them to suffer.

                      But never mind, we don't need to delve too deeply into this line of argument here.

                      Emotional and intellectual pain can become a learning experience if the right attitude is adopted. Pleasurable experiences are the (necessary) intermittent rewards of life for pulling through and not quitting life early.

                      For an intellectual species like ours, physical pain is much less valuable than mental pain/anguish/disappointment; therefore I'd sooner see physical pain reduced than painful emotions.

                      But the key is balance and the depressed do not have the advantage of that balance and therefore need the help of technology.

                      Anon, you rightly speak of the importance of "balance". The question is where we want to locate the default "hedonic set-point" around which our lives fluctuate - and where to set the upper and lower bounds of our hedonic range. Twin studies confirm a high degree of genetic loading. Crudely speaking, the higher the set-point of our hedonic treadmill, the richer our lives (cf. the paper on COMT gene variation cited in the article). As a statement of empirical psychology, pleasure is the "engine" of value-creation - and turbocharged pleasure centres create the potential for hypervaluable states. Indeed, by posthuman standards, even the richest peak experiences of Homo sapiens may be boring and trivial.

                      In contrast to temperamentally happy people, depressives tend to find life valueless and meaningless. Severe depressives tend to be nihilistic.

                      The enemy of learning and personal growth isn't lifelong bliss, but uniform bliss [and likewise, uniform despair]. So if you could choose, what default hedonic "dial-setting" would you select for your future offspring - if preimplantation diagnosis etc gave you a range of options? Or would you prefer to throw the genetic dice instead as now?

                  "What fun is a game you can never lose?"

                  Two replies:
                  1) There's a difference between losing a game and being tortured.
                  2) I can think of games you can never lose but still very much love playing. Sex is an example.

                  Furthermore, the analogy "life is a game" only goes so far, since games are usually voluntary, while existence is involuntary for most sentients - all except those who have the power to kill themselves painlessly. Non-human animals, for instance, lack that power.

                  "Who gets to define productive vs. unproductive? Who picks the new set of reward stimuli? What will prevent this ability from turning some people into happy mindless drones? Who gets to set the balance and who prevents its exploitation?"

                  Mindless drones are economically unprofitable since they're not creative. As for the potential power abuse, I agree that it's dangerous as hell. I think this is true of all transhumanist key technologies. That's why I was talking about checks and balances + global transparency of unprecedented levels earlier (not just top-down surveillance but also bottom-up and peer-to-peer, if you will).

                  It may well be that all of this fails and we end up with severe power distortions or even a malicious singleton. OTOH, it's hard to see how we could justify perpetuating the status quo of misery and suffering into the future without intervention. Not to speak of the momentum of technological progress, which is hardly being stopped by the worries of critics at present.

                  "We already have a built in system where we endure suffering when we aren't productive in our own eyes and gain deep satisfaction when we are."

                  Merely contradicts your earlier point that you need physical threats to get people to even check their voicemail.

                    "Two replies:
                    1) There's a difference between losing a game and being tortured.
                    2) I can think of games you can never lose but still very much love playing. Sex is an example."

                    1) I agree, I never said torture was a good idea. Getting dumped, having your company fail, your child dying, seeing another person suffer injustice... I do think you should feel some pain.
                    2) Really? You are emotionally and physically vulnerable. You are taking the risk that your partner will reject you. You try to give them pleasure while they do the same. You hold a person's body and trust, and you or they can destroy that trust at any time. It is exciting and pleasurable and it is a finite experience; all things I find great about it. Have you never had a disappointing or regretful encounter? Have you never felt nervous before a new partner that you care about? To just see it as pleasure without risk or adventure makes me sad for you.

                    *"Merely contradicts your earlier point that you need physical threats to get people to even check their voicemail."
                    Hardly a contradiction. I said that I check my voice mail because I have to do it to maintain my shelter and food supply. If I was left to my own goal directed tasks, as I hate voice mail, I would never choose that as my goal. I would paint a picture or argue in an internet forum. But I would still never check my bloody voicemail. I also stated that people are goal directed because when they feel that they aren't, their own mind takes care of dishing out the pain. That is another example of pain being a motivating factor. If you didn't feel down when you were consistently lazy you wouldn't have the same goal directed attitude.

                    *"Mindless drones are economically unprofitable since they're not creative. "
                    Oh right, that is why we don't use automation in factories, it isn't profitable.
                    I write code for a living that, given a set of desired output parameters, creates combinations of plant processes, using a GA to refine a population of possibilities; once some optima are selected, it then runs models for each with varying parameters, also using a GA, and picks the winners. My code is mindless but it comes up with some creative plant designs. Imagine code that can generate this code! Who needs people's creativity when a computer can do it better and faster?
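                    [For readers unfamiliar with the technique the commenter mentions: a genetic algorithm (GA) refines a population of candidate designs by repeatedly keeping the fittest and breeding mutated variants of them. Below is a minimal, self-contained sketch; the target vector, fitness function and all parameter choices are illustrative placeholders, not the commenter's actual plant-design code.]

```python
import random

random.seed(0)  # deterministic run, for illustration only

# Hypothetical "desired output parameters"; a real plant model would expose
# many more, and fitness would come from running that model.
TARGET = [0.7, 0.2, 0.9, 0.4]

def fitness(design):
    """Higher is better: negative squared error against the target."""
    return -sum((d - t) ** 2 for d, t in zip(design, TARGET))

def mutate(design, sigma=0.1):
    """Perturb each parameter with Gaussian noise."""
    return [d + random.gauss(0, sigma) for d in design]

def crossover(a, b):
    """Uniform crossover: pick each parameter from either parent."""
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(pop_size=50, generations=100):
    # Start from random candidate designs.
    population = [[random.random() for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elite = population[:pop_size // 5]  # keep the fittest fifth
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        population = elite + children
    return max(population, key=fitness)

best = evolve()
```

The second GA stage the commenter describes would simply repeat the same select-and-breed loop over each surviving design's model parameters.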

                    *"It may well be that all of this fails and we end up with severe power distortions or even a malicious singleton."
                    Yes, that is a very, very real possibility. So why take the risk when, instead of removing pain and suffering, we can actually just make the world better, which will in turn make people happier? We could work to eliminate disease, torture, mass starvation and mental illness, improve social justice, become vegan, etc., thus changing the status quo for everyone. People as a whole get happier because things are better, not because you rewired them to be. We don't get utopia, but we get a great world that people will naturally be happy in.

                      *"To just see it as pleasure with out risk or adventure makes me sad for you."
                      Again, as with the "monopoly without money" analogy, challenge and adventure are hardly the same as risks of real involuntary suffering. Take sex when it does go wrong (e.g. rape). Or take the unlucky people who suffer from depression caused by love sickness without any voluntary control over their feelings. This can turn even generally pleasurable aspects of life into profound negative experiences, and I submit that it is not this risk of suffering that makes an experience worthwhile - the experiential value occurs when things go right, not when they go wrong. Of course, you may draw valuable life lessons from suffering. But in extreme cases, these lessons are piling up in a life you regret having lived. There's quite a number of people living sub-par lives like this.

                      *"Oh right, that is why we don't use automation in factories, it isn't profitable."
                      Automation is the exact reason why mindless drones aren't going to be profitable. When we have AI that surpasses humans in all regards (i.e. code that generates code etc.), humans will generally be unprofitable - the outcome of that scenario depends on what the AI agents value. If they are superior to humans and they don't value human life, humanity won't persist. That is a realistic scenario under these premises, but I find it hard to see why exploitation of humans as worker drones is realistic. But even if it were, artificially happy drones are less problematic than many forms of exploitation that exist today (from the subjective perspective of those who are being exploited).

                      *"So why take the risk when we can instead of removing pain and suffering, actually just make the world better, which will in turn make people happier."
                      The risk is there anyway. Technological progress has never been stopped by ethical argument alone, and it won't be stopped globally unless we already *have* a singleton. Do you really think nations like China or even the US military will forgo the use of enhancement technologies just because they could lead to power concentrations and abuse? And in the long run, too?

                      Furthermore, conventional ways of "making the world better" and biotechnological ways could be combined. Yes, you can reduce torture by a greater global network of transparency and insistence on humane treatment of prisoners. You can try to find out where secret prisons are, you can reward whistleblowers, etc. But once you can give people the ability of willful analgesia, the option to choose their own level of pain sensitivity dynamically, people are going to want that as well. When conventional interventions fail, I'd rather be a torture victim with the ability to tune out the suffering than one without.

                      *"Take sex when it does go wrong (e.g. rape). Or take the unlucky people who suffer from depression caused by love sickness without any volunatry control over their feelings. This can turn even generally pleasurable aspects of life into profound negative experiences, and I submit that it is not this risk of suffering that makes an experience worthwhile - the experiential value occurs when things go right, not when they go wrong."
                      If you no longer feel upset or angry or violated when you are raped, why would rape be wrong? One party gets the sex they want and the other suffers no mental anguish or physical pain. A net gain for pleasure.
                      If every person in an active war zone is content, going about their lives and feeling nothing when a child is killed, why should you take issue with it? If you can't feel the suffering of others, why would you try to help them?

                      *"Of course, you may draw valuable life lessions from suffering. But in extreme cases, these lessons are piling up in a life you regret having lived."
                      You'll be happy, but it won't change what happened to you, or where and who you are. You will be happy, but if you don't feel the pain of the injustices and tragedies befallen you, you won't have any drive to confront and change them. If you were in an abusive relationship you could simply keep dialing up your pleasure and guarantee that the situation will never change. If you were wronged by a law and didn't feel upset by it, what would drive you to try and change it?

                      *"Technological progress has never been stopped by ethical argument alone"
                      Human reproductive cloning is banned in many countries and condemned by a UN declaration - bans created due to the ethical argument against it.

                      *"But once you can give people the ability of willful analgesia, the option to choose their own level of pain sensitivity dynamically, people are going to want that as well."
                      I know you are right here, but I hope they will choose to use it wisely, knowing the dangers to society.

                      ***I will concede that technology that could limit pain and help with painful memories and experiences would be good - if and only if it were applied in a fashion similar to antidepressants, i.e. purely as treatment for individuals too overburdened to deal with their emotions, NOT applied to the masses as the ultimate opiate. The solution may simply lie in regulation.

                      *"If you no longer feel upset or angry or violated when you are raped, why would rape be wrong? One party gets the sex they want and the other suffers no mental anguish or physical pain. A net gain for pleasure."
                      Not necessarily, since it prevents one party from doing things that are more productive/pleasurable instead, or choosing their sexual relationship policy according to their preferences. For instance, a couple may have chosen to amplify their desire to remain in an exclusive monogamous relationship, a desire that would then be thwarted. In the case of violence resulting in permanent physical or cognitive damage, the opportunity cost is even clearer.

                      *"If every person in an active war zone is content and go about their lives and feel nothing when a child is killed, why should you take issue with it?"
                      Because I see their loss as an opportunity cost of happiness, positive experience, and unique biographical narrative.

                      *"If you were in an abusive relationship you could simply keep dialing up your pleasure and guarantee that the situation will never change."
                      Or you could confront the situation because you don't fear the confrontation, and you see the abuse as a restriction to your desire satisfaction, which is an opportunity cost for happiness and narrative depth (versatility of positive experience).

                      *"If you were wronged by a law and didn't feel upset by it, what would drive you to try and change it?"
                      What would it mean to be wronged by a law in a world without suffering? It would probably mean to have desire satisfaction thwarted for no relevant social reason. That opportunity cost would drive people to try and change it - especially in a world free from fear.

                      ***I personally have three basic values: the absence of suffering, the presence of pleasure, and the versatility of experience (what I call narrative depth). In a world without suffering, two of the three remain, and they would still motivate - except that it would be positive rather than negative motivation.

                      "Or you could confront the situation because you don't fear the confrontation, and you see the abuse as a restriction to your desire satisfaction"
                      But why would you ever feel that your "desire satisfaction" is restricted when you can simply move your set point? I suppose my point is that if you can move your set point at will, you can take any situation, no matter how unequal, exploitative, sad, or bleak and make it the best experience of your life.

                      You could make life as a beaten slave feel more satisfying and pleasurable than penning the Great American Novel on a trip to Disney World on heroin. You could turn sitting on a hot stove into ecstasy. If you can turn every experience, even sitting on the street homeless, into maximum orgasmic ecstasy, or even just to a level where you love life on the street, why would you try to improve your position? Even if you work hard and become a corporate exec, on Sunday morning when you eat your Cheerios and look out the window, you'll be no happier than the guy on the street, or the man shot in the leg in a war zone.

                      Yes, logically, you should not regard life in some of those situations as personally fulfilling. But what is fulfillment? I submit that to us, personal fulfillment is being happy, content and satisfied in what we have done or are doing. People with control over their own feelings of happiness can simply change their set point and be happy, content, satisfied and feel the deepest fulfillment in doing anything. What would stop you from setting your set point so high that successfully taking a breath gives you feelings of happiness and fulfillment comparable to being elected president or seeing your child graduate?

                      Who decides what is happy enough? Will people who set their set point higher than others see the people not as happy as them as being in a state similar to that of a tortured person today?

                      If you look at the economics of the situation, if you have to put zero effort into being happy all the time; and you have to put in some effort to feel really happy; but you will never feel disappointed if you miss out on the really happy feelings; what incentive will you have to put in the effort?

                      Like I said before, the total eradication of pain and suffering may not lead to the utopia you think it will.

                      Hedonic Treader, I believe we have reached an impasse. You deeply wish to remove suffering, and I feel that your heart is in the right place. However, I still think the total and complete elimination of pain will result in some major negative consequences for society, open opportunities for great exploitation and imbalance of power, and remove one of the key things that makes life, imho, worth living. I don't think we will ever reach consensus on this issue, but I must say that debating you has been enjoyable. I hope that you reply to this, and I will check for it, but I will have nothing further to post. You have the last word.

                      *"I suppose my point is that if you can move your set point at will, you can take any situation, no mater how unequal, exploitative, sad, or bleak and make it the best experience of your life."

                      Yes, if you could max out your subjective experience independent of context, it would be as unstable as wire-heading - such a state could only be remotely realistic in a care-giver scenario where some external agency takes care of all problems for you while you're permanently blissed out.

                      However, the gradients approach (http://www.gradients.com/) doesn't necessarily entail the ability to max out pleasure at will, just a set-point which is higher than today's (and ideally never needs to go below zero). It remains to be seen whether this will be possible and evolutionarily stable.

                      *"If you look at the economics of the situation, if you have to put zero effort into being happy all the time; and you have to put in some effort to feel really happy; but you will never feel disappointed if you miss out on the really happy feelings; what incentive will you have to put in the effort."

                      The value difference needs to be high enough, and the effort could actually be rewarding in itself as an activity. There is no guarantee that this can work, and I'd expect failure modes on the way to a stable equilibrium while trying this, but that is always true for life; there is no fail-safe master plan for a better future.

                      *"Will people who set their set point higher than others see the people not as happy as them as being in a state similar to that of a tortured person today?"

                      That is a good question. In fact, I hope they would see it at least as an opportunity cost worth taking seriously. A world without such a sense of compassion could potentially introduce new risks, especially if suffering isn't completely abolished, or it is encountered elsewhere in the universe or re-created by the happy posthumans in simulations or other re-creations. In order to prevent new suffering and existential threats, they would certainly need the ability to empathize with others as well as care for their own well-being. Global artificial indifference would indeed be very risky.

                      Thanks for the debate, and take care!

        Imposing our morality on nature is not the same as trying to avoid disaster by moving an asteroid out of the way! Why would you want to destroy the predator/prey/competition dynamic that has got us evolutionarily this far? That just seems daft to me!

        Disease and violence are some of the driving factors behind evolution. If these 'spurs' disappeared, evolution would grind to a halt; there would be no imperative to evolve a new behaviour/sense/biomechanism/whatever.

        As far as monetary motivation is concerned, it is that capitalist dogma that has created much of the pain and suffering that exists today. As a species, we are not altruistic, and we are not naturally sedentary. Eliminate violence, and it will emerge again as an advantage to whoever wields it.

          Anonymous, selection pressure isn't going to slacken. On the contrary, the imminent reproductive revolution of "designer babies" entails that selection pressure is likely to intensify as our nastier alleles and allelic combinations are weeded out of the genome.

          You speak of "imposing" our morality on Nature. Have we the right to "impose" our morality on cannibals? Or ethnic groups that seek to enslave other groups? I hope so. And if so, then can one justify a laissez faire attitude to members of other species while simultaneously condemning such a "hands off" attitude to members of other ethnic groups?

          What (if anything) is the morally relevant difference between anthropocentric bias and ethnocentric bias?

            "What (if anything) is the morally relevant difference between anthropocentric bias and ethnocentric bias?"

            Humans can make a conscious, educated decision on a topic, can factor in the moral and ethical ramifications, and can of their own free will ask for assistance (in the case of the enslaved) or, in this case, ask to undergo bio-modification to remove pain.

            Animals cannot do this.

            Since they cannot make a rational choice as to whether or not to undergo such a procedure, nor can they understand the effects it will have on their possible evolutionary path, to FORCE this change upon them would be morally wrong.

              The issue of consent is indeed important. Informed consent depends on intellectual competence. Since nonhuman animals are intellectually akin to small children, does this mean we should leave them to their fate? This inference doesn't follow. Uncontroversially, we have a duty to act in the best interests of small children and the mentally handicapped. If they need, say, immunization or a pain-killing injection, we have a duty to offer care even though they don't understand the implications of the procedure. So what grounds have we for withholding care from their non-human counterparts at an equivalent level of functional development? Yes, over a course of several million years, the evolutionary trajectory of the species in question might otherwise take paths we can't anticipate. But this conjecture would be weak grounds for not wiping out, say, malaria and the sickle-cell mutation alike in sub-Saharan Africa - despite evidence of selection pressure at work in malaria-ridden ethnic groups. By the same token, restricting our compassion to members of our own biological species would be both arbitrary and callous.

              The distinction between "humans" and "animals" is deeply rooted in our Judeo-Christian culture. But ever since Darwin, a dichotomy between human and nonhuman animals has been intellectually untenable.

    I don't believe in a god but I am spiritual. The natural world is primal, violent and runs on animals' base instincts. If you disrupt that then you are disrupting the whole planet on a fundamental level. The disruption of the world's natural energy fields would be immense - think of the religions of tribal and indigenous people worldwide. Do you think they would thank you for killing off the living incarnations of so many of their gods and deities? The idea is incredibly short-sighted and makes me feel physically ill imagining a world where a wolf pack is grazing alongside deer, or even worse: a total extinction of wolves. You and your ideas disgust me on a spiritual, emotional, intellectual and physical level. You should reserve such ridiculous ideas for your own website where intelligent people don't have to read the shit spilling out of your mouth. Humans' mere existence on the scale it is today causes suffering in numerous animal species, so unless you want to kill some of your precious herbivores such as rhinoceros, giraffe and elephants to spare them the suffering of having their habitat destroyed by human civilisation spreading, I suggest you rethink your hypothesis. Maybe kill all humans to reduce animal suffering considerably. Just make sure you go first - I promise we will too, just want to make sure of your commitment to this idiocy first.

    The idea that interfering in evolution and nature to eliminate predation is the "moral" thing to do is a joke. Take a course on ecology and then think about the effects of removing predators: massive population booms of prey animals would destroy grasslands and forests as they grazed and fed and reproduced till there was virtually nothing left in 15-20 generations, and there would need to be mass cullings across every ecosystem from which you removed predators. How are you going to make sure the oceans maintain a balance if you remove every predatory fish? Most fish aren't herbivores; they feed on smaller fish and a host of other small ocean life. What you are proposing isn't a utopian dream, it's a hippie's drug-induced hallucination. Lions eating grass and leaves? Predators are the most incredible animals on this planet, and if I saw you in the street I would feel the urge to throw you under a bus, because idiocy on such a grand scale should be prevented from infecting the human gene pool further.

      Death threats? Seriously?

        Chicken?

          Debate ethics?

          It sounds like James Walker has a serious Beef with David Pearce.

    I noticed the author seemed to be using a type of utilitarianism, where happiness was to be maximized, and suffering was to be minimized. While I am on the same page, it is important to understand the yin-yang of it - you can't have happiness without suffering (I know there is unnecessary suffering, but that doesn't detract from this point). I'm not being rhetorical, but instead am just pointing out that an existence free of risk and pain might not be as attractive as you might assume. By the way, dominance/submission isn't pretty, but frankly I don't see an alternative unless you want to engineer a stagnant superficial system. Finally, beware of the utility-monster when doing evaluation using utilitarianism. Again, this isn't rhetorical: if one segment of the population is given more weight, it can result in a lopsided evaluation of the utility of a system.

    Brad, the Transhumanist Declaration doesn't endorse any particular ethical theory. Deontologists, virtue theorists and utilitarians alike can endorse the well-being of all sentience. Yes, as it happens, I'm a utilitarian - technically a negative utilitarian. But most transhumanists aren't classical utilitarians at all.

    The alleged relativity of (un)happiness? The claim that "you can't have happiness without suffering" is intuitively plausible. But empirically it's false. Compare the millions of people in the world today who endure chronic pain and/or depression. Some severe depressives can't even imagine what it is like to be happy. Does the absence in their lives of any affective states above "hedonic zero" somehow soften their experience or make it any less real? Sadly not. For evolutionary reasons, extreme hyperthymia or unipolar euphoric (hypo)mania are quite rare in contemporary humans. But if the spectrum of behaviour they promote had been fitness-enhancing in the ancestral environment of adaptation, then they would presumably be today's norm.

    A recipe for stagnation? Uniform bliss, like uniform despair, would indeed be a recipe for a stagnant society. But recalibrating the "set-point" of our hedonic treadmill can permit life based on information-sensitive gradients of well-being. Compare the subjective effect of different alleles of the COMT gene in the paper cited in Section Two above. Information - sometimes snappily defined as a "difference that makes a difference" - doesn't depend on our absolute location on the hedonic scale. In a more speculative vein, I conjecture that the functional analogues of discontent can co-exist with a posthuman biology of gradients of sublime bliss beyond anything physiologically feasible now - "hypervaluable" peak experiences as natural as breathing.

    James, yes, phasing out predators would indeed lead to an ecologically unsustainable explosion in the population of herbivores ("prey") - but only in the absence of a comprehensive program of fertility-control in our wildlife parks. If you're interested in the latest developments in immunocontraception and other technologies of fertility-regulation, you might like to follow up some of the hotlinks in:
    http://www.abolitionist.com/reprogramming/index.html
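    To illustrate the dynamic at stake, here is a minimal Lotka-Volterra predator-prey sketch. It is a toy model: every rate constant in it is an illustrative assumption, not ecological data. Removing the predation term sends the prey population into unchecked exponential growth, which is why some substitute check on fertility would be indispensable:

```python
# Minimal Lotka-Volterra predator-prey model, integrated with Euler steps.
# All rate constants below are illustrative assumptions, not field data.

def simulate(predation=True, steps=5000, dt=0.01):
    prey, pred = 10.0, 5.0
    a, b, c, d = 1.0, 0.1, 1.5, 0.075  # growth, predation, predator death, conversion
    for _ in range(steps):
        if predation:
            dprey = a * prey - b * prey * pred
            dpred = d * prey * pred - c * pred
        else:
            dprey = a * prey   # unchecked exponential growth
            dpred = -c * pred  # predators removed, so they simply die off
        prey += dprey * dt
        pred += dpred * dt
    return prey, pred

with_pred, _ = simulate(predation=True)
without_pred, _ = simulate(predation=False)
# With predation the prey population stays bounded (it cycles);
# without it, prey numbers explode toward ecological collapse.
print(f"prey after 50 time units, with predators:    {with_pred:,.0f}")
print(f"prey after 50 time units, without predators: {without_pred:,.0f}")
```

    The point of the sketch is only qualitative: deleting the predation term without adding any other brake (such as immunocontraception) leaves nothing in the model to bound the prey population.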

    Noir, could you possibly clarify what you mean by "disruption of the world's natural energy fields"?

    @David Pearce
    1) There are so many species that eat other species to manage them all would be insane. Look, there are millions of animal types. Each type has a vast array of reproductive and environmental requirements that control its survival. Couple these requirements with randomness like: constant changes in infection types/rates, weather, food resources, genetic mutation etc.
    You end up with not only a probabilistic system but one with nearly infinite degrees of freedom.
    2) How are you going to control all of the types of predatory fish, insects, birds, lizards etc.?
    3) What about species that are dependent on predators for survival? And the animals that are dependent on them?
    4) How do you stop or model the transfer of chemical hormones or other methods of population control through the food chain or water contamination? How do we know what effects various chemical cocktails or genetic engineering will have on the wider ecosystem?
    5) How will you gather the data on every animal, insect, plant and every concentration of every chemical in every location and every birth and death in real time? How will you develop a model from this data? How will you test the viability of the model? How do you plan to control this model? Then once modeled, how will you test the viability of modifications to variables and predict their outcomes? How do you factor in all the random and uncontrollable inputs?
    6) In the event of a miscalculation, when over- or under-population of a species occurs, what humane rebalancing do you suggest? Since this will have thrown your whole model off, how do you compensate for the far-reaching effects this will have throughout the world?

    In systems control terms, you've got a system with billions of degrees of freedom, billions of random inputs, huge dead times, lots of black-box systems with their own built-in control systems, and limited/delayed ways to affect the parameters we can control. Oh, and even though some bits can be tested individually, they all will have unexpected/unpredictable effects on other systems. Any person worth their salt in control theory will tell you that you can't do it!

    --I suggest you take a class in ecology and one in advanced control theory.

      Sven, first, many thanks for such an incisive and information-dense response that raises precisely the kinds of technical issues that any serious effort to build cruelty-free ecosystems will face. However, I'm going to stick to my guns: suffering in the living world may indeed persist indefinitely; but if it does, then it will endure because we choose to preserve it, not because the problem is computationally intractable.

      For a start, the exponential growth of computer power means that by the second half of this century, we'll have the computational resources to monitor and micromanage every cubic millimetre of the planet with unimaginable finesse - probably down to the molecular level and below. I'm much more cautious in my metaphysical assumptions than some of my "Singularitarian" colleagues
      ( http://en.wikipedia.org/wiki/Technological_singularity ).
      But in the coming era of mature quantum computing, our computational resources will dwarf the naive human imagination. Anyhow, to be more concrete: if we are ethically serious about getting rid of wild animal suffering by means other than outright destruction
      ( cf.  http://robertwiblin.wordpress.com/2010/01/21/just-destroy-nature/ ) 
      then we need to construct a "hierarchy of suffering", and only gradually work our way "down" the phylogenetic tree. (Do we need to preserve every species of beetle, as distinct from iconic vertebrates?) Simplifying, the larger vertebrates have larger pain centres and therefore presumably suffer more than their humbler cousins. By way of illustration, here is a single "case study", chosen because elephants have the largest mind/brains of terrestrial vertebrates. To provide cradle-to-the-grave welfare care for the existing population of 500,000 or so African elephants we'll need: tagging and GPS tracking, biochip implants to monitor health status, immunocontraception to regulate fertility, food and water provision in time of famine and drought, veterinary and obstetric services, and late-life orthodontics (mature elephants slowly starve to death - sometimes slowly eaten alive by lions after they finally collapse through inanition - when their final set of molars wears out in their sixth or maybe seventh decade. So durable artificial molars must be fitted instead: this has only ever been done with captive elephants to date.) The only natural predators of elephants - normally only the youngsters - are lions. So the 30,000-odd African lion population would need to be tagged and neurochipped for surveillance, behavioural modification and policing purposes too. Abundant in vitro mincemeat can guarantee the lion population's nutritional status - a stopgap prior to metabolic tweaking. The cost at current prices? Several billion dollars a year, by my back-of-an-envelope calculations. Is it worth the price? Well, how would you like to be slowly and agonisingly eaten alive or starve to death?
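      The back-of-envelope arithmetic can be made explicit. The population counts (roughly 500,000 elephants and 30,000 lions) come from the case study above; every per-animal cost below is a hypothetical placeholder chosen only to show the shape of the calculation, not a sourced estimate:

```python
# Back-of-envelope cost sketch for the elephant/lion case study.
# Population counts are from the case study; every per-animal cost
# below is a hypothetical placeholder, not a sourced estimate.

ELEPHANTS = 500_000
LIONS = 30_000

# Assumed annual cost per elephant (USD): GPS tracking, health biochips,
# immunocontraception, famine/drought provisioning, veterinary and
# obstetric services, and late-life dentistry.
cost_per_elephant = 2_000 + 500 + 300 + 1_500 + 2_000 + 700   # = 7,000

# Assumed annual cost per lion (USD): tagging/neurochipping,
# surveillance and policing, and cultured-meat provisioning.
cost_per_lion = 3_000 + 2_000 + 5_000                          # = 10,000

total = ELEPHANTS * cost_per_elephant + LIONS * cost_per_lion
print(f"elephants: ${ELEPHANTS * cost_per_elephant / 1e9:.2f}bn/yr")
print(f"lions:     ${LIONS * cost_per_lion / 1e9:.2f}bn/yr")
print(f"total:     ${total / 1e9:.2f}bn/yr")
```

      With these placeholder figures the total comes to a few billion dollars a year, consistent with the "several billion dollars a year" estimate; the real numbers would of course depend on detailed feasibility studies.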

      Critically, this would be a job for professional ecologists, population biologists, GPS-tracking specialists, wildlife-contraception experts, dynamical-systems control theorists and other professionals, not philosophers! We're concerned here with proof of concept prior to detailed feasibility studies, rather than implementation details. Nonetheless I'd like to tackle in depth elsewhere some of the detailed points you raise - I'll add the URL when I do. Finally, I'd like to stress again that I think our overriding priority now is dismantling the apparatus of factory farming and the industrialised killing of the meat "industry". Getting rid of the death factories is technically a lot easier than ecosystem redesign.

         

        Ok, you seem to not get it.
        It isn't a matter of better computers.
        You just can't do it.

        Your case study of elephants proves my point. Look at all that is needed to carry out what you want with ONLY elephants. Now, there are between 3 and 30 million species in Animalia, living everywhere from the bottom of the ocean to the highest mountains.
        Now, in your case study you are looking at one animal. What happens when the model is wrong and there are too many elephants? Or how can you predict that a virus will mutate and swiftly kill off 25% of them? What effect will that have on the plant life? How will that change in competition for resources affect other species?
        How do you predict that a 100-year flood 10 years from now won't kill off most of a species' food supply, and adjust the birth rate accordingly?

        Chaotic, random systems with billions of degrees of freedom cannot be modeled, no matter how many sensors you have.
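        The sensitivity being described here can be shown in a few lines with the logistic map, a textbook chaotic system (r = 4.0 is the standard fully chaotic parameter choice). Even when the dynamics are known exactly, a microscopic error in the measured initial state destroys long-range prediction:

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x),
# a textbook chaotic system (r = 4.0 is the standard fully chaotic choice).
# The dynamics are known exactly, yet a 1e-12 error in the measured
# starting state destroys predictability within a few dozen steps.

r = 4.0
x, y = 0.3, 0.3 + 1e-12   # two almost-identical initial states
max_gap = 0.0

for step in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))

# The trajectories start 1e-12 apart and end up completely uncorrelated.
print(f"final gap: {abs(x - y):.4f}, largest gap seen: {max_gap:.4f}")
```

        A one-dimensional map already behaves this way; an ecosystem adds billions of coupled variables and random inputs on top of it.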

        Biology isn't like analytical chemistry or civil engineering.

        Putting that aside:
        To extend your rationale: couldn't we wait for the singularity, sterilize all life and let it all die off? The number of dead would be far less than the number that would die over the time it takes to implement this program, and still less than those that would die accidentally during the program. Since computing power will be so great, we can move our consciousness into the machines. Not needing organic bodies anymore, we'll no longer need plants or animals or food.
        Bing bang boom, we live forever, we no longer feel pain, global death and suffering will be minimized and removed.

          Sven, humans already employ all manner of captive breeding programs, "rewilding" initiatives and "wildlife management" programs - beyond mere habitat destruction - that interfere with natural ecosystems. All your well-taken points on the unforeseen ramifications of ecosystem management - and the challenges posed by chaotic [in the technical sense] systems that show exponential sensitivity to initial conditions - apply to our existing initiatives in so-called Conservation Biology. The difference is that our existing interventions focus on species-conservation rather than promoting the subjective well-being of sentient individuals. So on ethical grounds I think the normative discipline of Conservation Biology needs to be complemented by an explicitly normative discipline of Ethical Biology. Phasing out suffering isn't some utopian Five Year Plan; it's a long-term strategic goal that may take ages to bring to completion - likewise eliminating involuntary suffering in humans. Needless to say, I don't know the timescale. The molecular signature of the last fleeting invertebrate pinprick may be many centuries or millennia away; I'd guess several centuries. And of course this momentous transition may never happen. The Transhumanist Declaration expresses a commitment, not a prediction.

          You ask why not eliminate suffering by phasing out sentient life in the meatworld. Well, I guess one might ask a dedicated pain specialist why he doesn't eliminate physical pain by euthanising his patients. Most people aren't negative utilitarians. Even if one is a negative utilitarian, the best way to reduce suffering isn't to plot schemes of death and extinction. Rather it's to promote the development of technologies to deliver the well-being of all sentience in the highest possible amplitude of Everett branches in our forward light-cone.      

            I didn't say kill them. I said "sterilize", i.e. prevent all life on earth from breeding. They live and die naturally. After that point all pain is removed and life continues in the machines. It isn't that much further from controlling all breeding and animal design on earth. It is more like: if a person were going to give birth to a child with a horrible pain condition (in this case, the ability to feel pain, when all human life has surpassed this), should they be allowed to reproduce?

              Sven, you raise difficult issues in the philosophy of mind and consciousness that I purposely avoided. The belief that not just intelligence, but also phenomenal mind, is substrate-neutral is quite widely shared among AI researchers. And if phenomenal mind really does "emerge" at some level of computational abstraction, then non-violently phasing out Darwinian life in the meatworld and living in digital nirvana instead may be feasible. However, as someone who believes that even humble animal mind/brains are actually quantum computers, and our qualia are "program-resistant" features of basement reality, I'm probably not the best person to engage you in debate here. An interesting speculation nonetheless.

    For now I think it is impossible to achieve your goal without the eradication of every single animal with a brain, because if you left only herbivores, their numbers would be controlled by other factors like bacterial diseases or hunger. Everything in the biosphere is interconnected. Btw, you didn't mention ageing, but ageing can also produce a certain level of suffering. If you make animals immortal, then sooner or later they will have nothing to eat, because evolution for plants won't stop. As for eradication, I believe you'll have a problem convincing people. In fact, nature is getting more protected, not less.

      Beo, if we judge a cruelty-free world is ethically desirable, then the systematic application of depot-contraception and other technologies of cross-species fertility control can manage populations in our future wildlife parks. Today such management is done only for single species in isolation e.g. immunocontraception for elephants in the overpopulated Kruger National Park in preference to the cruel practice of culling. But massive cross-species extension will be feasible as our "circle of compassion" expands in conjunction (I hope) with our technological capabilities.

      Radical antiaging technologies? Almost certainly we'll use them both on ourselves and our animal companions ("pets"). I suspect they'll benefit at least vertebrates in our future "wildlife parks", too, in preference to costlier geriatric care.     

    Mr. Pearse; Were you recently discharged from a facility such as "Bridewell"? Good Luck to you brits on your continuous search to breed out the good dental/small ear/attracted to opposite gender gene! Now if you could only work on the complete annihilation of that nasty gene that persistently causes british imperialism, the beautiful, natural world would then return to true "Utopia"! Up the One True Kingdom, "KERRY"!

    David Pearce, transhumanism and veganism are by far not the same. Moreover, from what I read on vegetarian and vegan forums, I have the impression that many vegans would eliminate most of humanity for eating meat and other exploitation of animals if they could. And others rarely pull them up.
    I'm convinced that people are unconditionally BETTER than cows, pigs and dogs. At least because we can think (and do think) about nature well-being, while they don't.

    Yes yes and yes again! For a long time I've believed that, once we employ technology to end human suffering, we must extend these benefits to all sentient beings. David, I agree with you more than 100%.

    But I can imagine the status-quo bias you mention. Many folks who would agree with abolishing human-inflicted animal cruelty, would balk at the idea of preventing animals from hurting each other or being inflicted with disease ("but that's just how the natural world works...").

    They would accuse you of trying to play God. And in a weird way, they would be correct. You would use advanced AI and nanotech to prevent any sentient creature from suffering. It's an endeavor for which every god we've ever created should have strived. And one at which all of them have done a phenomenally lousy job.

    "4) Carnivorous Nonhuman Predators Can Be Phased Out Too

    Perhaps the biggest obstacle to phasing out suffering altogether is wild animal suffering. Right now, billions of sentient beings in the wild are dying of thirst and hunger, or being disembowelled, asphyxiated or eaten alive by predators. Jeff McMahan's landmark article in the New York Times is the first print-published plea from a mainstream academic calling for predatory carnivorism to be phased out. Needless to say, the technical obstacles, notably pan-species fertility control, are immense. But the biggest obstacle to compassionate ecosystem design is simply status quo bias."

    I think we have a major problem with people who think there is something wrong with the natural cycle of things. It is humans that have upset the natural cycle through gross misunderstandings - and now, by another gross misunderstanding of nature, are we to decide to phase out carnivorous predators because they are too violent for our tastes?

    Hard-core vegans and other people that are afraid of blood should smarten up and see that the world is so much more complex than their fragile and rather limited moral views show them it is.

    Compassion also knows when to kick ass and take names; compassion isn't a nice lady who goes around giving everyone flowers. True compassion is allowing something to develop in accordance with its own free will - that means letting that someone or something be itself. Nature has done this for the longest time, and it didn't seem to have a problem with it until we came along and messed it all up with a myriad of ideals that are far detached from true nature.

    There is such a thing as intelligence, and there is such a thing as human nature - it goes both ways; nature is not degeneracy but rather is based on evolution. Arguably, we are the first species that can alter its own evolution by its own volition - humans are teleological. But are we to interfere in the evolution of other species just because we think it's gross to watch? Grow up and get real; there are far more serious things going on in the world.

    We worry about animal rights when we don't even have the slightest clue about human rights. Your moral compass and priorities are ass-backwards as with the rest of humanity, we can only fix the world around us by fixing our broken selves and living responsibly.

    Jevgen, most transhumanists are not vegans, and most vegans are not transhumanists. So the two categories clearly aren't co-extensive. But the implications of a commitment to the well-being of all sentience are more radical than a casual endorsement of the Transhumanist Declaration might suggest. It's got a lot more intellectual bite than the usual high-minded platitudes.

    You write: "I'm convinced that people are unconditionally BETTER than cows, pigs and dogs." One might equally say that the great majority of human adults are BETTER than human toddlers. Cognitively, it's true. But if we have a duty to care for the interests of human toddlers, then on what grounds may we neglect the wellbeing of their functional equivalents? How may we best guard against arbitrary anthropocentric bias?

    Anastasius, you write: "I think we have a major problem with people who think there is something wrong with the natural cycle of things."

    Permit me a thought-experiment.  If the transhumanist vision of the wellbeing of sentience ever comes to pass, would you seek to reintroduce pain, suffering, depression, aging, death and all the innumerable cruelties of Darwinian life "red in tooth and claw"? If so, on what grounds? And if not, doesn't your position  run the risk of arbitrary status quo bias?       

    http://home.freeuk.net/russica2/books/vrun/10.html

    "Our common aim is prevention of the dying out of whales. How are we to achieve this noble aim, I ask you? In my opinion, the most effective method is extermination, for when they are fully exterminated, none will be left to die out."

    I see David Pearce has changed this a bit by replacing "dying out" with "suffering" and whales with herbivores. I'm afraid he might end up saving humans the same way!

    The free will argument is noteworthy. How are we to know whether or not an as-of-yet unborn human (or non-human sentient being) will desire a different kind of existence than preceding beings?

    Obviously, if we tampered with their genetics "for their own good," they would never know. On the other end of the spectrum, who really desires pain as a beneficial feature of life? We often say that it "makes us stronger" or helps us to "appreciate the good things in life," but to me that's always seemed like an emotional-based way of coping with what is beyond our perceived ability to change.

    As for "upsetting the natural cycle," recognize that this is what humans have been doing on this planet for thousands of years. Agriculture, the domestication of animals ... need I say more. The philosophical/moral imperative here is to what end, in what ways, and how much. This is precisely why this issue fascinates me.

    As for synthetic meat, I'm all for it. If we can mass-produce cultured meat that's virtually indistinguishable from the flesh of dead animals, why not?

    You don't have to be 'BETTER' to suffer.

    Congrats to David for his approach. An additional 'technoscientific' perspective regarding human transcendence of suffering should be considered, I suggest.

    Algonomy is a proposed discipline that is concerned with the knowledge and management of the objective phenomenon of suffering. That new approach can be helpful in various ways :

    1) Algonomy deals with suffering itself - directly, purposely, enduringly.

    2) Algonomy considers suffering as one single whole phenomenon that must be controlled as satisfactorily as possible.

    3) Algonomy takes into account all aspects of the problem of suffering.

    4) Algonomy allows us to act in a strategic fashion for the control of suffering.

    5) Algonomy would welcome transhuman knowledge and abilities for the mastery of suffering.

    I'm broadly sympathetic to many of the sentiments expressed in this article. However, the view that predators should be phased out strikes me as naive and somewhat dangerous. Setting aside the argument that predation is essential for the fitness of the prey species, surely you are imposing human emotions onto creatures which are not human. Yes, suffering sucks whether you are a human or an antelope, but do we really have a moral imperative to intervene? Do we exterminate the predators or remove their teeth and claws so they can no longer kill - maybe feed them intravenously so they don't starve? Either way you are still causing suffering. Or do we completely reengineer all ecosystems so that everything we find squeamish is removed? Surely it would be easier to wipe out all life on earth and replace it with mechanical impersonations of animals.
    Also, you advocate biotechnology as a means to eliminate suffering in both humans and animals. But surely the advances in biomedical science required to achieve this will require countless in vivo experiments and a mountain of dead animals. Does the end justify the means?
    Sorry if this seems like an attack. I think that the elimination of suffering is a courageous goal but these two issues have always bothered me when I visit your site.

    Of course not all suffering is physical. Much of human suffering is psychological. I don't believe that addressing that is a matter of technology. It takes "raising consciousness", the person growing wiser, over time.

    Pain as such is just information. If it were just a high-priority notification, delivered and then easily silenced once we "get the message", it would not be something that needs eliminating. You could engineer a less obnoxious notification, though.

    Emotion is not all primitive. Emotions are also automated valuations - valuations that have been driven into the subconscious. They are lightning-like valuations of good and bad [for me]. That sort of role for emotion is very useful. We just need better understanding of, and control over, the programming.

    Genetic and other tinkering for more rewarding happy experiences is a great thing. Or it is as long as we do not wirehead to just feel good whether we are achieving or experiencing any actual values in reality or not.

    When you talk about phasing out animals that prey on one another, you're effectively advocating the destruction of many species. That is part of eliminating suffering? Swell. Then a much more powerful, benign alien race could justify the destruction of humanity on the basis of eliminating suffering. In point of fact, you cannot eliminate suffering across the entire animal kingdom without wiping out almost all of it.

    Personally I have no problem if a posthuman paradise arrives after I die. I would prefer not to die and not to miss it, of course. But I am delighted if humanity (or our future variants) gets there at all. I am pretty sure most evolved intelligent species in this universe don't make it that far.

    Samantha, AAG, Beo, many thanks for your critical feedback. There are indeed some futurists who argue we should in effect wipe most wildlife out, or rather allow most species to go extinct:
    http://robertwiblin.wordpress.com/tag/nature/ 
    Arguably, an extinction scenario is a disguised implication of a classical utilitarian ethic as well, i.e. don't we have an obligation to use biotech and IT to create blissful post-Darwinian superbeings with Jupiter-sized pleasure centres?
     
    But that's not what I'm advocating. Neither the negative utilitarian, who thinks we have an overriding obligation to abolish suffering, nor the transhumanist, who subscribes to the Transhumanist Declaration's commitment to the wellbeing of all sentience, need endorse species extinction.

    Critically, if we want instead both to:

     1) retain a recognisable approximation of today's iconic species in our wildlife parks

     2) abolish the cruelties of Nature

    then something like the compassionately run ecosystems discussed in
    http://www.abolitionist.com/reprogramming/index.html
    will be indispensable. Such ecosystems will be costly and ambitious, yes, but technically feasible. 
             

    There is the possibility that interventions to re-design natural ecosystems and re-program predators will turn out to be too complex or unstable in the long term. However, we are already shrinking natural ecosystems today and replacing them with civilization. If we managed to create highly efficient artificial resource cycles that can maintain civilization without an abundance of natural wildlife, replacing wildlife outright is another possible option, which seems perfectly fine to me.

    As for our moral responsibility to intervene, try to overcome the naturalistic fallacy and compare the situation with one of human suffering: Do we have a responsibility - or at least a thorough justification - to intervene when parents inflict pain and suffering on their children? We could declare this to be a private affair of strangers, not worthy of our intervention. However, there is an overwhelming social consensus that this is not a valid ethical judgment.

    "Hard-core vegans and other people that are afraid of blood"

    Misrepresentation. You don't even address the problem of suffering from the POV of the entities who are non-consensually forced to experience it. Once you learn to live up to the standards of a valid intellectual discourse, try again.

3 Trackbacks

  1. [...] in the human context can also be applied to non-human animals. Two good examples highlighted in an article published in H+ Magazine last year include the reduction or elimination of physical pain and the development of in-vitro meat and [...]

  2. [...] God-Like Brain? the goods just keep coming. In an article written in 2010 for H+ Magazine entitled, Five Top Reasons Transhumanism Can Eliminate Suffering written by David Pearce once again shows where the transhumanist mindset stands on ethical matters. [...]

  3. […] transhumanists is one of optimism and excitement. Far from impoverishing the human experience, they argue, the elimination of pain, suffering, and physiological and psychological limitations will enable us […]
