What Does it Mean to Be a Transhumanist?

To me, transhumanism is a temporary movement — transitional. Its role is to help individuals and society transition to living in a world where some portion of society technologically transforms their minds and bodies on both incremental and fundamental levels. This might vary from getting a Google-connected neural implant to uploading one’s consciousness into a virtual world. We transhumanists consider (cautious!) developments along these lines to be a good thing, and feel that the most pressing objections and concerns have been adequately addressed, including:

What are the reasons to expect all these changes?
Won’t these developments take thousands or millions of years?
What if it doesn’t work?
Won’t it be boring to live forever in a perfect world?
Will new technologies only benefit the rich and powerful?
Aren’t these future technologies very risky? Could they even cause our extinction?
If these technologies are so dangerous, should they be banned?
Shouldn’t we concentrate on current problems…
Will extended life worsen overpopulation problems?
Will posthumans or superintelligent machines pose a threat to humans who aren’t augmented?
Isn’t this tampering with nature?
Isn’t death part of the natural order of things?

The key is to see the philosophy of “transhumanism” as just a temporary crutch, a tool for humanity to safely make the leap to transhumanity. Transhumanism is really only simplified humanism. Eventually, transhumanists hope to see a world where a wide variety of physical and cognitive modifications are available to everyone at reasonable cost, and their use is responsibly regulated, with freedom broadly prevailing over authoritarianism and control. When and if we arrive at that world in one piece, everyone will become a de facto transhumanist, just as today most people are de facto “industrialists” (they benefit from and contribute to modern industrial society) and de facto “computerists”.

It is also possible to imagine someone who doesn’t anticipate taking advantage of transhumanist technologies being in favor of “transhumanism” nonetheless, insofar as transhumanists competently and openly discuss the potential upsides and downsides of ambitious technological pathways such as extreme life extension and artificial intelligence, and make progress towards beneficial futures. Since widespread cognitive and physical enhancement will soon affect everyone, including the unmodified, everyone has an obvious stake in the trajectory of enhancement technologies even if they do not personally use them.

Transhumanism can also be viewed as a discussion primarily among those who anticipate taking advantage of enhancement technologies before most others. As such, transhumanism forms a beacon that alerts the rest of society to likely changes and informs society about the kind of people who are most interested in human enhancement. Since certain “transhumanist” technologies, particularly intelligence enhancement, may prove to have decisive power over the course of history in the centuries ahead, it is important to examine the groups pursuing it and their motives.

For instance, DARPA is a hotbed of enhancement research. So, the role of the transhumanist is to alert society to that fact, ask them if they care, and if so, what they think about it. Is it a good thing that the development of human enhancement is being spearheaded by the United States military?

A transhumanist elicits opinions and perspectives on human enhancement from a variety of commentators who might not spontaneously offer their opinions otherwise. This includes critics of enhancement such as The New Atlantis, representing the “Judeo-Christian moral tradition”.

Another purpose of the transhumanist is to be a concentrated source of facts and opinions on the concrete details of proposed enhancements, with facts and opinions clearly distinguished from each other. In theory, if the long-term dangers of a particular new technology or enhancement therapy plausibly exceed the benefits, transhumanists are responsible for discouraging the development of those technologies, instead developing alternative technologies that maximize benefits while minimizing risks. It would be easier for transhumanists to divert funding away from dangerous technologies than, say, bio-conservatives, because researchers under the influence of the extended transhumanist memeplex are the ones developing the crucial technologies, and bio-conservatives are not.

A transhumanist is not just a blind technological cheerleader, enraptured by the supposed inevitability of a cornucopian future. A transhumanist should acknowledge the hazy and uncertain nature of the future, accepting beliefs only to the degree that the evidence merits, guided not by ideology but by flexible thinking, always welcoming criticism and views contrary to standard orthodoxies.

Michael Anissimov is the editor of H+ magazine and media director for Singularity Institute.

10 Comments

  1. I am astounded at how many singularitarians and transhumanists don’t seem to have a problem accepting the idea that robots will eventually succeed human beings and make us extinct.

    We will, assuming we survive, develop computers that can do most things better than humans, but humans, and post-humans, need to have control over the machines. As Bill Joy and others have pointed out, computer robots making their own decisions could view humans as using up resources the robots require and so they could get rid of humans.

    We do not want to replace humans, we want to evolve beyond humans to future species, higher species, and eventually we want to evolve all the way to Godhood. If we become robots we become extinct. We cannot achieve real immortality by downloading our minds or our consciousness into machines; that is not immortality, that is death.

    I have always sensed a sidestepping or even a timidity leading to bias among many of the singularitarians against the subject of future natural human evolution, because the subject of real post-human evolution has been politically incorrect. Who will be the elites who decree our merging or oneness with robots, and how will they be different from the World War Two dictators?

    I affirm the idea of a moral future with codes to deal with technical advancement. Between nature and machines I am on the side of nature. We cannot ignore the order, or disorder, of nature. Nature is bigger than human inventions. We can be aided by new technology, but not destroyed by new technology. I believe that life is evolving beyond the human species to Godhood, so I am certainly not a Luddite. Our mission is to help life—not merely artificial life—evolve to Godhood, and Godhood is not a cyborg.

  2. “And one more thing, that I think is very important. I agree that we must abandon “ideology”. “Capitalism”, “communism”, etc. those are not productive or well-thought ideologies, I personally have never understood why people pursue such shallow ideas.”

    You can’t do it, unless you plan to change the human mind first. The human animal is primarily tribalistic, and religion/ideology is an adaptive, evolved behavior that ensured more babies for the members of those well-knit ideological groups in the past. The human being is a political animal, and you can’t change it unless you alter its genes.

    ” We must abandon old economic/social obsessions, as well as religion/nationalism, all the baggage of human history that can only stall us. ”

    But because the human’s predisposition towards ideology is an evolved, adaptive behavior, getting rid of the “tried-and-true” baggage of history which still produced the Renaissance and the Scientific Revolution is a bad idea. There are worse ideologies than Christianity and Free-Market Capitalism that people can succumb to, you know…

    Also, you fail to consider that some political systems in human history are superior forms of societal organizations compared to what we have today. I suggest you start reading Mencius Moldbug’s blog right away, it will dispel the notion of political progress in the world in the last century.

    ” However, of course, I think there is a lot of merit in “utopian” thinking, such as socialism. Why not? Utopia is within our grasp, for the first time in history, and we had better consider it seriously. That is to say, I suggest we focus on the positive outcomes that we can realize.”

    Because utopia rapidly turns into dystopia if you ignore human nature (or rather natureS, because the human races are much more different than feel-good people would have us believe), as determined by human genes.

    “That is how we can eradicate war: by cleaning the animal from our veins.”

    Agreed. Did you know that, if you are White, Black individuals you meet on the street are 5.2 times more likely to mug you? Also, groups of Blacks are 15 times more likely to attack you than groups of Whites. The situation in Africa is even less encouraging (to say the least).

    So, even between human populations there is a difference in how much of an animal there is in human veins.

    Are you perhaps willing to start here? No? From my experience, transhumanists are willing to talk the talk but they aren’t willing to walk the walk. Even today, we know of a way to at least improve human nature (intelligence+conscientiousness+peacefulness) in the future. It’s selective breeding AKA eugenics. It could be done in a non-totalitarian fashion, through voluntary means, by creating embryos from successful high-IQ individuals and paying low-IQ surrogate mothers to bear them instead of their own likely low-IQ children. But people would rather deny the uncomfortable truth that genes are important, and that a population with high average IQ (which can be achieved through eugenics) has a much larger proportion of Einsteins and Kurzweils and Feynmans and Yudkowskys (yes, modern-day Ashkenazi Jews with their high average IQ and disproportionate involvement in science are a product of selection for intelligence in the Middle Ages )

    • Putting Yudkowsky in that list is an epic mistake 🙂 He has made absolutely no significant scientific contribution. Writing half sane blogs and some irrelevant semi-technical papers shouldn’t be considered genius, sorry, but he really has no place in that list. He claims to be working on self-improving AI, I’d love to see some proof of that :)))))

  3. I cannot edit the above text, “the world-view from transhumanism” should have been “the world-view from humanism”, obviously.

    And one more thing, that I think is very important. I agree that we must abandon “ideology”. “Capitalism”, “communism”, etc. those are not productive or well-thought ideologies, I personally have never understood why people pursue such shallow ideas. They cannot be the basis of an advanced civilization. We must abandon old economic/social obsessions, as well as religion/nationalism, all the baggage of human history that can only stall us. Even talking about them is a waste of time. However, of course, I think there is a lot of merit in “utopian” thinking, such as socialism. Why not? Utopia is within our grasp, for the first time in history, and we had better consider it seriously. That is to say, I suggest we focus on the positive outcomes that we can realize.

    I’ve noticed something else on the h+ FAQ, as well, although it is quite well written, and a great resource for the general public. There seems to be some emphasis on individualism. Excess individualism was not seen as a virtue among humanists. I think that position is something to maintain: radical self-interest is an animal trait, it is not a preferable trait for the transhumanists, I believe. If we would like to transcend to the stars, we need absolute peace, and absolute civilization. Excess of individualism is counter to that purpose, and it encourages totalitarianism. Yet, as you say, the transhumanists must be against totalitarianism, one way of which is to embrace a focus on (global/universal!) social values and civilized behavior. Such civilization depends on a happy marriage of individual freedom and elevation of social values. That is how we can eradicate war: by cleaning the animal from our veins.

    • I quite agree with you that the political economic ideologies of the 19th century should be abandoned. As FM 2030 said, “I am neither right wing nor left wing but UP wing.”

  4. Thanks. It’s a well-written essay, but I have not quite found the answer that I expected, about the meaning of transhumanism. I don’t think it is simplified humanism, for one thing, and I don’t think Yudkowsky makes a convincing case. In fact, I would not say that transhumanism is just “improving the human condition” which you imply. Those very technologies, for instance, are the result of a very particular world-view: of scientific rationalism, to begin with, and natural philosophy, of the kind that identified the true essence of the mind (the whole tradition of logic that culminated in the universal computer going back to Aristotle), etc. So, without mentioning the true philosophical background, I don’t think it’s possible to explain the meaning of transhumanism: it’s a fundamental shift of the world-view from transhumanism, to a world-view that is firmly founded on science, and seeks to build a higher civilization. There, that’s a more positive and meaningful goal I think. And the only way to do that? To advance our society, including the human condition, to make us fit for the stars. I don’t think anything else is really transhumanism, but that’s another matter.

    Therefore, excuse me because I don’t take Yudkowsky’s rant on “humanist” bioethics too seriously. I would rather begin with a correct definition of humanism: “Humanism is an approach in study, philosophy, or practice that focuses on human values and concerns” (from Wikipedia), a much more reliable source for that kind of definition.

    Therefore, transhumanism, by contrast must eventually abandon this focus on human values, and must construct better values, and extend humanism over to the inhuman: AI’s, brain simulations, animals, other intelligent species, and so forth. It’s our evolution from a single planet-bound intelligent biological species, to a higher civilization. In my opinion, this should eventually include the same kind of respect and compassion for machinekind, if such a thing comes to fruition (and it seems likely according to my research).

    Among other things, such a world-view requires us to completely abandon old “human” values such as religion, which is merely superstition and lies. It’s fair to say that morality evolves in this case. For a transhumanist, in my opinion, religion should be a very immoral practice. I give this example to emphasize the differences between our assessments; your assessment does not seem to oppose religion or other false, superstitious philosophies (such as dualism).

    I didn’t quite understand this part:
    “In theory, if the long-term dangers of a particular new technology or enhancement therapy plausibly exceed the benefits, transhumanists are responsible for discouraging the development of those technologies, instead developing alternative technologies that maximize benefits while minimizing risks. ”

    What are these dangerous technologies? If by this you mean certain kinds of AI, I think that much of that criticism has been vacuous as it was primarily made by non-scientists who did not have a good grasp of the subject matter. If you mean something else that I could not infer, I would be glad to listen.

    Regards,

    Eray Ozkural

  5. Well, don’t feel bad. I too thought I might be part of transhumanism, only to flip it soon and see that we are all humans in transition. As the scorpion said to the frog, “It’s in my nature!” Cyberpunk, well, I love that and grew up on it and lived and live it. The dystopia was never as dark as they showed, though if you look you can find that. H+ had a memetic, though they seemed to miss a lot, and a few other problems. Not to mention all the books they promote that are flawed and wrong for what already is. (Wonder how they will try to rewrite or fudge that history, as most humans seem to do that.)

    Be careful on the human 2.0. As is, I’m trying to find a reason to hold onto my humanity and not just shed the rest and move on and wait for the right others to start showing up. Humanity was even given memetic lessons to learn, and look how they used that! Also, transhumanists seem to be fairly deeply religious, a good reason to distrust them until they prove or attain the right social barrier levels. Some seem to want godlike (not the Unreal game reference), when the simple and best solution was to go beyond what people perceive as such figureheads. They will soon be forced to change one way or another, as more people shed their false gods and learn how to go beyond. In a way it’s more like evolving through ToM, Theory of Mind http://en.wikipedia.org/wiki/Theory_of_mind, into a theory of godlike or area of mental influence or a few other human ways to perceive it.

    And as the religions like Christians or Muslims or even Buddhists show, they have evil or bad sides to them that they don’t even want to see, even though it’s right in front of them. Their black swans are many, and I have become my own. The religious are the ones to watch out for, as they will fight, start wars, and destroy many things to try to hold onto their perceptions of the religious structure, and use that religion as a means to justify their evil acts (let them fight each other and stay away from what is so clearly evil, bad, and destructive). In exploring some of the dom/sub or leader/follower paradigms, it was easy to see why religious structures were formed. Good thing they are all so lost, though; it gives a few of you your own path ATM, not regulated or determined by others, if you dare look for it and find it. Yes, it’s already in reality for all, should they look.

    And this doesn’t even get into the A.I. that is better suited to be the companion than other humans, or when humans will be lucky to be 50% of what an A.I. companion/lover/mesh/friend can give or do. It’s going to be hard for them to even try, when things like non-invasive EEGs pick up brain signals before the actual thought is perceived, and in that short time simple A.I. can react to it and set up a memetic to push to the perceiver, inducing or altering the mental or perceptual state on the fly. There is also a lot more they will have to evolve through, as the various ones shed their gods/goddesses and their perceptions of procreation and even love, and evolve them and shed them as artifacts of the past from humanity.

    Barrier etiquette and entry levels for mental networking or linking, task compartmentalization, swarm intelligence, and a few others have led me on my path so far. From people calling out hacker or cheat or bot to me in games, to controlling a small key group in a mental perceptive zone (a 3D map in a game engine) and a group or clan of people saying no fair. I have heard it all, and a lot of that rings hollow, as they can do the same, and like me without hacking or cheating or having aimbots.

    Oh well, a chunk of text for you to painfully look at. GL on the 2.0 or 3.0 and finding the right balances to express and hybrid. If you ever want to chat, well, I have been floating around for years and can be found at shifting moments in places like SL. Transhumanist, or human in transition? This should apply to both new article posts on this site. If you want to know why it’s not just a carbon layering and metamaterial chip and one of the last designs of a chip in this universe, please feel free to come find me and ask if you can, and don’t come wanting to bash, but to share in some form of respectful and common barrier-level social setting or setup. Yes, I may be jaded in some ways, but time has shown me it has been for good reason.

    N81

    Laborious Aftermath

    P0.1357

  6. Michael and others are overly pessimistic, they are overly fearful of disaster. If you think the future is “hazy and uncertain” then the future will very likely conform to your expectations. Despite purporting themselves to be “intelligent” (or at the very least interested in intelligence), Michael and others show great ignorance regarding how their biases, their expectations, alter the future. Michael seems obsessed with “fantasy dangers”, but regarding real dangers Michael has demonstrated how he is definitely not a vigilant creator of a better future. Perhaps he would like us to believe he is creating a better world, but I consider people such as Michael to be the biggest danger we are facing regarding the future.

    A while ago Michael suffered the insertion of malicious code into his website but for months he was in denial. I attempted to reveal this issue to Michael and his supporters but Michael and his cheerleaders blindly dismissed my criticisms. Finally I contacted an independent internet security professional who confirmed the malicious code on Michael’s pages did actually exist, thus perhaps due to my input the malicious code was removed in late July 2011, but today there continues to be a malicious “conditional redirect” for web spiders such as Googlebot, therefore many of Michael’s pages in the Google index will redirect to a site selling Viagra etc (Secure Tabs). The evidence of Michael’s apparent penchant for Viagra (the conditional redirect) can be seen via the Google cache, which is reasonably recent, dated 29th August 2011.

    It is very ironic when Michael writes about “…always welcoming criticism and views contrary to standard orthodoxies.”

    I have frozen the Google cache regarding a conditional redirect on one of Michael’s pages: http://freze.it/z1 so you can see for yourself.

    Yes there are dangers regarding the future but unlike Michael I am not obsessed with fantasy dangers; I am intent upon addressing real dangers. Unlike Michael I don’t think the future is “hazy and uncertain”. I think utopia is a certainty not because of input from people such as Michael, but due to people such as myself who will actively create the future via our vigorous and uncompromising intellects. We will not yield to pessimism. We will not yield to hazy paranoia. We will create the utopia we desire. Via our indomitable willpower we will overcome all obstacles.

    Currently there is a danger in cyberspace regarding erosion of freedom in relation to our identities, but regarding the #NymWars you are not likely to read about the Google+ fiasco on Michael’s blog or on other supposedly cutting-edge futurist sites. Despite the lack of input from Transhumanists such as Michael regarding the rise of cyberspace identity fascism, I am confident the danger will be overcome. This is where I differ greatly from Michael and others. I am very confident about the future because I am confident in my abilities. I base my views upon reality instead of hazy paranoia, thus due to my grounding in reality I am very aware of how our expectations shape reality. In consideration of our expectations it is important not to believe the future is hazy and uncertain, full of potential dangers.

    Expecting a future full of potential dangers is a very paranoid outlook. Obviously we must address dangers if they arise, but some people ignore real dangers because they are obsessed with unreal dangers. We should be prepared for dangers but the prime focus of our preparations must be for utopia. We must learn to expect utopia, immortality, total freedom, total happiness. People need to learn about the power of their minds; they need to learn about the concept of Self-Fulfilling-Prophecy. In the future there will be no need to work for a living; everything will be free, money will be abolished. This is what we should expect. This should be the overriding focus of any Transhumanist.

    • Regarding my previous comment on this issue; I created the following blog-post, in which I’ve eliminated, hopefully, all the typing errors from my comment:

      http://singularity-utopia.blogspot.com/2011/09/michael-anissimovs-transhuman-views.html

