The Singularity and Transhumanism

Recently Adam Ford conducted a fascinating video interview with transhumanist pioneer Max More (the latest in a long and wonderful series of video interviews by Adam). We are pleased to present you with both the video interview itself, and an edited textual transcript. Enjoy!

I’ve been in Transhumanism for about as long as Modern Transhumanism has existed. There have been various precursors, and it’s hard to say exactly where things started. But back in 1990 I wrote an essay called “Transhumanism: Toward a Futurist Philosophy.”

People had used the word Transhuman before, but nobody had really created a philosophy of Transhumanism in the modern sense. So I was really the first person to do that, and I developed what we call “The Principles of Extropy,” which is a particular Transhumanist philosophy.

And I’ve had a very long interest in Life Extension. I started practicing life extension, and became intellectually interested in the ideas, well before I stopped growing. In my early twenties I started the first Cryonics organization in England, and I’m currently the President and CEO of the Alcor Life Extension Foundation, which is the world’s leading cryopreservation organization.

The Singularity

The Singularity to me seems to be a little bit too ‘singular’. It seems to assume that all these different technologies converge and take off all at the same pace. Whereas in my view, we’re more likely to see a Surge which might trail off, might slow down—in fact, maybe a series of surges.

Information Technology may improve at a different rate than biological technology, or nanotechnology, and so on. I don’t think these are all necessarily coordinated. I can see the argument that if everything is driven by a superintelligence, then perhaps that drives everything else. But I think that rather ignores economic and organizational factors, consumer adoption factors, and so on.

So I tend to think that what we’ll see is the same as what we’ve seen historically, but sped up: certain technologies will take off at an accelerated pace and it will look like we’re hitting a Singularity. But then they may actually slow down for a while, and then they may take off again, a typical S-curve, and there may be a whole series of these rather than one singular event.

I tend to think that the idea of ‘a Singularity’ is a little bit seductive; it’s very appealing, and it has certain historical, religious and mythical reverberations that could attract people to the idea. But I think it may be a more complex picture than that.

The Singularity as a Surge

By a surge, I really mean the part of what’s usually called the Singularity where we see a rapid increase in the growth rate of a technology, a kind of exponential growth. But the idea of a surge implies that it doesn’t last forever; it gradually tails off or flattens out. Whereas with a Singularity, if you look at the curves, they seem to be exponential until they become vertical, and you get an infinite rate of change in a finite amount of time.

So the idea of a surge is that, yes, we do see these sudden rushes forward. With Aerospace Technology, for instance, we had some very rapid advances: from the Wright Brothers onwards there were rapid changes. And now it’s pretty much slowed down. Even though there is some continued progress in military aircraft, we don’t really see planes getting any faster; we’re not seeing Mach-20 or Mach-50 planes. So aerospace had a surge in the Twentieth Century, and that has pretty much petered out now.

Tipler’s Omega Point

The Omega Point idea, as developed by the physicist Frank Tipler, builds on Teilhard de Chardin’s idea of a kind of intellectual realm driven by computing power, very much along the lines of the Singularity view, one that reaches an infinite state where essentially every atom of matter in the universe eventually becomes part of a thinking device. And in Tipler’s particular view, that seems to become a single mind.

Even if matter were converted entirely into thinking matter, I’m not sure why a single mind would be the result. I think that view is driven, again, by the Judeo-Christian tradition of a singular mind. And so, in his view, we would have almost infinite computing power, which could, in principle, resurrect everybody who ever lived by simulating all possible people who ever existed.

In his idea, they would come back into a simulated reality which would be much like a heaven. So it fits the Judeo-Christian world view very conveniently. I’m not sure it’s going to happen that way at all. But I think in broad outlines this general picture is kind of plausible: the idea that we will increasingly turn unthinking matter into thinking matter. If you look far enough ahead, millions, billions, trillions of years into the future, it seems a fairly plausible picture of the general direction: matter will essentially be turned into thinking matter, into consciousness.

Not in any mystical sense, simply that we’ll be using lots and lots of matter for computing, which means our minds can be vastly greater, vastly more powerful, and the world will be unimaginably different.

Religion and Transhumanism

I think that when people look at the future, if they do accept this idea that there are going to be drastic changes and great advances, they will necessarily try to fit that very complex, impossible-to-really-understand future into very familiar mental models, because they want to put things in boxes; they want to feel like they have some kind of grip on it. So I won’t be surprised to see Christian Transhumanists, and Mormon Transhumanists, and even Buddhist Transhumanists; every other group will take some set of these ideas and gradually accept them, but they will make that future world fit with their preexisting views of how it will be.

I think the essence of Transhumanism is not religious. It’s really based on Humanism; it’s an extension of Humanism, hence “Transhumanism.” It’s really based on the ideas of reason, progress and enlightenment, and a kind of secularism. But that doesn’t mean the Transhumanist ideas of self-improvement and enhancement are incompatible with religion. I think those are potentially compatible with at least non-fundamentalist forms of religion.

Posthumanism – A Challenge to Humanism?

Back in 1993, I wrote an essay called “On Becoming Posthuman” for Free Inquiry Magazine, which is probably the premier humanist publication in the United States, at the invitation of the editor.

That was another very interesting experience, because he told me there was a huge response: he got more letters responding to it than to any previous article. And he said they were pretty evenly divided, 50-50. Half of them said, “This is really fascinating. I’d like to know more about this. It sounds like a good idea.” And the other half thought, “This is a terrible idea!”, because it was challenging “Human”-ism. It was challenging the idea that we should just be, you know, the best humans that we can be, because we were saying we could become something more than human.

So half of those people were clinging to the current idea of the human, whereas I think the other half understood that we were really building on Humanism. There is this tradition in Humanism, this idea of progress, of reason, of looking past all the artificial boundaries, the racial and gender boundaries and so on. It’s all about improving everybody’s condition.

So I see Transhumanism as a direct descendant of the Enlightenment humanist project of challenging every orthodox belief, challenging everything that we currently accept and saying, “Why can’t we do better?” And pushing that not just to improving society, not just to improving education, but to asking fundamental questions: “Why can’t we improve Human Biology? Why can’t we change our Genome? Just because it’s the way it is doesn’t mean it’s as good as it can be. Why do we age and die? Why can’t we do something about that?”

So, to me, Transhumanism is the natural successor to Humanism in a positive sense.

Habit-based Objections to Transhumanism

People tend to make an apparently clear distinction between treating a disease or dysfunction (getting a cochlear implant or a new heart valve, say) on one hand, and enhancement on the other. A true enhancement would mean essentially becoming something better than you are, functioning at a much higher level than any human being does. But that distinction is really pretty blurry the more you look at it. I think it’s really a matter of habit and what you’re used to, because if you’re having a heart valve replaced, you’re just getting back to a very familiar baseline, whereas a more radical enhancement is something new.

However, I think what we’ll see is that as these become available and tested and safe, people will very quickly forget their objections. It will be very much like, say, open heart surgery. When that was first introduced, most people, though you wouldn’t realize this now, were horrified by the idea. That may not be really surprising if you think about it: we’re talking about cutting your chest open, sticking hands in there, moving things around and sewing things up. That actually is a fairly gruesome idea. But now we think, “Oh okay, if I have to have it done, you know, I’m not looking forward to it, but everybody does it.”

I think if we have Life Extension Technologies, if we have Cognitive Upgrades, people will very quickly realize the benefits and advantages of those, and their apparent in-principle objections will very quickly disappear.

A Historical Perspective on Extending the Self

It’s certainly true that we’ve always extended ourselves physically and cognitively. We’ve used external calculating devices like the abacus for a long time. We’ve used various kinds of artificial teeth for many centuries. And we’ve used stones to increase our ability to cut and to stab and to build things. So, yes, we’ve extended the body and the mind for many, many years.

I don’t really call that Transhuman. I tend to reserve that term for something that really makes a fundamental change in the human condition. And the human condition, if that’s not to be arbitrary, I think has to be defined by our genes, because that’s what separates us from other species. Our genes lead to a certain kind of brain development, to certain abilities, to a range of visual acuity and auditory perception, all limited by the way our genes have built us. When we start talking about directly altering those genes to give us abilities beyond that, or implanting devices, or doing some reengineering of the human body or brain that allows us to have perceptual, cognitive and emotional ranges beyond those of any human being, then we can talk about a Transhuman.

So wearing contact lenses doesn’t really make you a Transhuman, but it’s all part of the same process of augmenting ourselves.

Essential Transhumanism

At the Humanity+ Conference, I talked about Essential Transhumanism, and the reason I chose that topic was that this was the first Transhumanist conference in Asia, and it seemed to me that many people might not have a clear idea of what Transhumanism is. They’ve heard many talks about various particular aspects, certain technologies like robotics or AI or Biotechnology. But they may be wondering: how does this all fit together? What exactly is Transhumanism?

Since Modern Transhumanism started in the late 80s, it has really flowered and grown and developed in many different directions. And so it can be confusing to isolate its core. That’s what I was trying to get at: I was pointing out how certain ideas have been emphasized, perhaps overemphasized, and people were thinking that was Transhumanism. But, really, the core of Transhumanism is the idea of using reason, science and technology, backed by goodwill, to overcome fundamental human limits: to live longer than we’ve ever lived, to become smarter, to become emotionally better than we’ve ever been. That is the core of it. All the other things are details.

The particular technologies are not the essence of it. Maybe we’ll use Nanotechnology, maybe we’ll use Biotechnology, maybe we’ll use some other method. So it’s good to have arguments about which technology is most effective and most promising, but that’s not the core of what defines Transhumanism. It’s defined by its values, of progress and reason and optimism and of challenging limits, and by pursuing those relentlessly. That’s really what Transhumanism is, essentially.

Basis of Objections to Transhumanism

There are a certain number of objections that come up frequently, especially when we talk about Life Extension. The typical ones: “What about overpopulation?” “What about resources?” “Won’t I become bored?” “Won’t dictators stay on for millions of years ruling their countries?” Other people complain that it’s somehow unnatural, even though human nature has always been about modifying ourselves and changing ourselves. It’s a whole slew of objections. To me, they’re all based on a combination of fear and lack of imagination.

Essentially, what people usually do when they think about these distant scenarios is project how we are today into the future. So, for one thing, when they talk about Life Extension, bizarrely, they automatically think that we’re talking about living on, getting older and older and more decrepit, and of course you wouldn’t want to live like that. But we’re not talking about that; we’re talking about living youthfully and vigorously, in fact, better than we’ve ever been.

They also project other things remaining the same: not having new technologies, no new means of dealing with the environment. The fact is that we’ve had environmental crises throughout human history. Back in the early Industrial Revolution, the British were burning all the forests for wood. But does that mean there are no trees left on the planet now? No, because we go through cycles. We develop new technologies, new ways of producing energy, and in the same way we will respond to future challenges.

So I think what Transhumanists are pretty good at is thinking along multiple tracks: they can think about multiple changes at a time, not just one single change. A single change is not the way the world works. Almost all these objections are based on the false idea that one thing will change and nothing else will. These objectors just can’t imagine other possibilities; they can’t imagine new technologies changing the rules, as they always have done and will continue to do.

Benefits of Transhumanism

One of the most attractive things about Transhumanism, I think, is that it’s a very thoroughgoing philosophy of improvement. It’s about self-improvement. It’s about improving society, improving the economy, improving all our possibilities. So it’s fundamentally a very progressive philosophy.

It’s very popular, especially among young people, to make a big deal of being anti-racist and anti-sexist. Transhumanists find that a little bit of a ‘yawner’ because of course that’s very obvious: if we’re talking about making medical alterations to ourselves, then differences in skin color and gender seem pretty trivial by comparison. So Transhumanism is attractive because it really overcomes those rather artificial distinctions and looks well beyond them: what can we, individually and as a species, do to vastly improve our condition?

So it has a really visionary component. It’s not just about the problems of today and tomorrow, or even the next five years; it looks far beyond that. So that has to be attractive. It’s bringing back into society a vision that’s been lacking for some time. I think we’ve become so focused on short-term problems and on complaining about how things are that we’ve lost a lot of that vision.

In the Enlightenment, that was the thing: they came out of the Dark Ages, when nothing much had happened for a thousand years. And so it was a very exciting view that, you know, with the scientific method we can actually improve life. That’s been a little bit forgotten in recent years. So I think of Transhumanism as a new, positive, science-and-reality-based way of approaching and improving the human future.

Short-term Thinking

I think there are multiple causes of short-term thinking. One of them is a fundamental feature of the human constitution, something we’ll have to overcome by making fundamental changes: our brains evolved to make short-term decisions.

In the early days, for most of human history, we didn’t live very long. We lived twenty or thirty years if we were lucky, and then we got killed by a tiger, or starved to death, or caught a disease, or fell off a cliff. So our brains evolved to make very short-term decisions. When facing a tiger, should I run this way or that way, should I duck down or run up a tree? We didn’t even have agriculture, so you didn’t have to think even a year ahead; you thought about the next chase and finding a warm cave. That was very short-term thinking, and that was how you survived. We now live in a very complex technological society with lots of interdependencies; we live a lot longer than we did, and we can expect that to continue.

So the brain is just fundamentally not well suited to long-term thinking, which is why I think we won’t have fundamental solutions to these problems until we change the human constitution itself. On top of that, of course, you have all kinds of institutional imperatives. You have people who need to make short-term profits to satisfy their shareholders, and that tends to make us think short-term. You have politicians who want to get reelected in two or three or four years, so they think about maximizing the benefits they apparently produce in those few years, and it doesn’t matter what the long-term consequences are. They may be quite willing to hand out lots of money without caring about the long-term debt they’re producing. So these are several factors that lead us to short-term thinking.

So Transhumanism is somewhat unique in really stretching out that thinking horizon: how can we think several decades or centuries ahead? That takes a lot of mental energy, because you don’t really know how things are going to work out; you can only look at general trajectories and trends. But by imagining those possibilities, of course, you help create them. If you never think about the long term, you will never really head in that direction; you’ll just be bumped around by the current forces.

Does Technology Cause Dehumanization?

I think the idea that technology causes dehumanization is actually the reverse of the truth. Certainly, some technologies can be abused. People who lock themselves in front of a screen and play the same video game for eighteen hours every day may be dehumanizing themselves, because it’s narrowing them down: it narrows their relationships, they don’t interact with other people, and they don’t have a wide range of activities. So you can abuse technology.

But, in general, it seems to me that if you’re at the mercy of nature, if you’re stuck in one particular environment and can’t meet anybody else, you become insular in your tribe, everybody else is seen as an enemy, and you become very narrow in your thinking. The long run of technological progress and economic development means that we can afford to be generous; we can afford to treat people outside our immediate tribe as part of the same community. The level of violence has declined, as a number of recent books have been arguing: if you look at the actual trends, violence has massively declined over the last few centuries as we’ve become wealthier and smarter and more civilized.

So, yes, somehow it’s always the very latest technology that’s dehumanizing. I think in another fifty years people will look at Genetic Engineering and say, “No, no, that’s fine,” but some other technology will be the dehumanizing one.

So it’s really a matter of unfamiliarity, combined with the unfortunate tendency of science fiction, especially in movies, to always portray dystopias, these dehumanized futures, just because that’s lazy and easy to portray. It’s much harder to portray a future where technology has mostly beneficial effects. It’s much easier to show Robotic Cyborgs going around trying to kill the rest of humanity.

So unfortunately that feeds that kind of view of dehumanization of technology.

Perils of the Precautionary Principle

In Europe and in America both, but especially in Europe, a lot of technology and environmental decisions have been based, either explicitly or implicitly, on something called the “Precautionary Principle”, which essentially says that you should not employ, and shouldn’t allow, any new technology unless you can prove beforehand that it’s completely safe.

There are various versions of it, but that’s the basic idea. One person summed it up quite nicely: never do anything for the first time. That is, of course, an absurd principle and impossible to actually act on; we can’t guarantee that any technology will have no bad effects. In fact, you can pretty much guarantee that it will, because we can’t foresee everything.

So if you try to act on this very cautious principle, what you do is end up preventing any kind of progress, and that itself causes great harm. That’s why I call it the Paradox of the Precautionary Principle: by being so cautious, you end up causing more harm than if you hadn’t been.

The Proactionary Principle

I developed something called the “Proactionary Principle”, which is a much more balanced decision-making principle. It takes into account many more effects, and it requires you to think very comprehensively, using the best knowledge we have of decision-making, of probability, of risk analysis. It’s really a set of ten principles, all brought together to encourage optimal decisions about technology and the environment. So it’s much more progress-friendly: the idea is to recognize the value of progress and to take proactive measures toward it, while also thinking about the possible downsides, planning ahead and minimizing those problems, because you can’t fully eliminate them.

It’s really based on the idea that you cannot know the best decisions to make until you start taking action. So you may start small, with experimental steps. But you can’t wait until you know everything about the outcome, because you’ll never know the outcome unless you actually start taking action. You have to learn by doing.


Cryonics

Cryonics is essentially the practice of preserving somebody at the point of legal or clinical death, which is not the same as biological, information-theoretic death: taking them down to a very low temperature while minimizing freezing damage, with the idea that our current criteria for death are historically transitory. Just as people who died fifty years ago could be brought back now with better technology, people who die today of cancer or heart disease or aging itself could be revived in the future.

What we do is preserve people in an unchanging state so that they can travel to a future time when much more advanced medical technology exists and they can have a second chance.

Why Choose to be Re-Animated?

This appeals to people who enjoy living and don’t see any reason why they should give up on it just because the heart gives out, or cancer decides to kill them, or aging gets them. That’s a very arbitrary thing; they want to choose how long they live. And they will come back not as an old person, not as somebody with that disease, but rejuvenated and at the peak fitness they’ve ever enjoyed. So why wouldn’t they want to come back? As long as you enjoy living, you want more of it, and if you can come back in a healthy young body, but with the wisdom you’ve accumulated over the years, you’ve got a pretty good advantage for a second life.


  1. granularity … just because things that “were supposed to happen” in 1980 still haven’t happened in 2012 isn’t a biggy. that’s the nature of predicted exponential progress. haven’t read mambo chicken but thanks for the recommendation. i’ll do so.

    i reckon the stagnationists are just modern-day malthusians …. but a couple of decades will let us know what side our bread is buttered on, or whatever.

  2. And then we have some high-profile Stagnationists, like Tyler Cowen, Peter Thiel and Neal Stephenson. They make a case that not a whole lot of the “futuristic” stuff will happen for many generations, and possibly never, for complicated reasons.

    Just dig out your copy of Ed Regis’s book Great Mambo Chicken, published in 1990, and read it in the light of reality in 2012. Regis’s book basically describes one failure after another from space colonization to “nanotechnology.” The transhumanism of 1990 became the paleofuture in a single generation.

  3. Nice article.

    Good observation about people having trouble thinking about the future because they assume one change happens while everything else stays the same. Fortunately, we can all get over that problem–and the short-term-thinking problem and many other problems–by talking and thinking about them.

    The idea of the singularity being conceived of in a “singular” way isn’t much of a fault, really. It’s just a matter of granularity in conception. Sure, there will be surges and many technological S-curves and all that. But, for a first approximation of the situation, the concept of a singularity and a single hyperbolic (or whatever) technological curve seems to be the best starting point for thinking about things. After all, it’s the concept that has singularly caught on as a model of the future.

  4. Interesting piece. And, of course, since I’ve also suggested (albeit in a less articulate way) that the singularity may not be singular, I find it reassuring to find that so remarkable a mind as Max More’s might come to similar conclusions.


  5. Link to the video interview with Max More:
