
Ben Goertzel Interviews Max More on the Future of Transhumanism and the Optimization of Disorder

Among the many wonderful speakers slated for the Humanity+ @ CalTech conference this weekend (Dec 4/5), one of the most exciting is Max More — who, though still relatively young, is as close to the “father of transhumanism” as one can get.

Way back in 1988, before the future was fashionable, Max co-founded the original transhumanist magazine, Extropy: The Journal of Transhumanist Thought. In the early 90s he founded Extropy Institute, organized five Extro conferences (the first explicitly transhumanist conferences), and founded the English cryonics organization Alcor-UK (originally Mizar).

But his greatest achievements are perhaps on the intellectual rather than organizational side. Max has authored a host of seminal transhumanist essays, forming a key part of the foundation of modern transhumanist thought, including “Transhumanism: Toward a Futurist Philosophy” (1990), “The Principles of Extropy”, “A Letter to Mother Nature: Amendments to the Human Constitution” (1999), “Technological Self-Transformation: Expanding Personal Extropy” (1993), “Dynamic Optimism: An Extropian Cognitive-Emotional Virtue” (1992), and more recent papers such as “True Transhumanism” (2009). He has also spoken at dozens of futurist conferences, and spread transhumanist ideas through numerous newspaper, magazine, and TV interviews.

I was extremely pleased earlier this year when Max decided to run for an open seat on the Board of Humanity+ (which he won easily), and also when he agreed to give a talk at the Humanity+ @ CalTech conference. His talk title is “How to Optimize Disorder in Today’s Organizations and Tomorrow’s Corporate Selves”, reflecting several of his current interests, including the intersection between transhumanist thought and the business world.

Earlier this week I interviewed Max about his talk at the conference, and his thoughts on the past, present and future of the transhumanist movement generally. Enjoy!


Max, you’re arguably the chief founder of the transhumanist movement, at least as much as that title belongs to any one person. I’m curious for a few of your thoughts on how the transhumanist movement has evolved in the decades since its inception. What has gone right, and what do you think could have gone better, etc.?


The transhumanist movement has reached adulthood, at least in terms of human years. Elements of transhumanism have been around for quite a while: Futurist FM-2030 used the term “transhuman” starting in the late 1960s, although he never developed that explicitly into a philosophy of transhumanism; and cryonics and life extension circles have resonated with transhumanist-related ideas since the 1960s or 1970s. Although I had published transhumanist thoughts earlier in the 1980s (for instance, in Biostasis, the newsletter of the first real UK cryonics organization), I think the real beginning was when we started publishing Extropy magazine in 1988. Even so, it was another couple of years until I explicitly stated and codified transhumanism. In 1990, I wrote and published both “The Extropian Principles” and “Transhumanism: Toward a Futurist Philosophy” (a version of which later appeared in the Fall 1994 issue of the humanist magazine Free Inquiry, as “On Becoming Posthuman”).

So, I take your question to ask about what has happened over the last 20 to 22 years. Back in the early days, the biggest challenge was simply to get transhumanist ideas taken seriously. Most of the public were unfamiliar with the ideas of radical life extension, cognitive augmentation, emotional self-modification, and many other transhumanist themes. Although computing power was continuing its steady pursuit of Moore’s Law, capabilities were still limited and earlier claims of the imminence of genuine artificial intelligence were looking dubious. One of the major changes since then is that our ideas have become much more widely accepted, or at least deemed worthy of serious discussion. We see far more dialogue with non- and anti-transhumanists, and much more serious academic engagement, especially in philosophy and bioethics.

At the same time, for both good and bad, we’ve seen fragmentation. Extropy Institute (like Humanity+ today) took its mission to be the development and dissemination of transhumanist ideas and culture as a whole. In the 1990s and 2000s, many groups emerged with their own more specific agendas, such as life extension advocacy, cognitive augmentation, and the singularity (although some specific-focus groups or publications had been around earlier, such as the cryonics organizations and the Claustrophobia newsletter, whose interests encompassed space migration, intelligence increase, and life extension). We saw a fragmentation also along political and economic lines, with increasing and unfortunate polarization, a trend that may have recently started to reverse to some extent.

Since the early transhumanists faced the major task of being heard and being taken seriously, the main goal was to get the ideas out there and to break through overly conservative thinking. We had to emphasize the possibilities and to convey our vision of possible human futures. The following years have seen a clear shift toward two things: first, more attention to converting transhumanist concepts into actionable policy (although some of that existed from the start) and, second, a growing emphasis on risks, threats, and dangers arising from the development of the technologies essential to the realization of the transhumanist vision. In my view, while part of this shift is salutary, it has gone too far.

Too many transhumanists seem on the verge of joining the anti-progress advocates of the precautionary principle. Perhaps they have seen how so many other organizations and movements (especially those to do with environmentalism) have flourished by emphasizing or exaggerating or inventing threats, and so seek, consciously or unconsciously, to emulate them. My work on the Proactionary Principle was motivated by the need to balance the social forces that seek to retard vital progress, but it is increasingly relevant when applied internally to transhumanism.

One area where the development of transhumanist thinking has been less than optimal, in my (no doubt quite unpopular) view, is the ever-greater overemphasis on the idea of a technological singularity. While various aspects of the concept certainly merit research, overall I see the singularity as an intellectual black hole, sucking in attention and distorting thinking in the area. I discussed some reasons for my view in a talk in London earlier in 2010; I’ve also blogged about it, and I’ll detail it further in a book chapter next year.


Interesting…. I’d like to dig a little deeper into your views on the Singularity. I’m wondering which of the following is the case:

A) You find Singularity scenarios plausible, but don’t think they deserve the extremely high degree of emphasis they’re getting lately in the media and certain quarters of the futurist community.


B) You find Singularity scenarios implausible as possible outcomes for humanity in the next century.


I think a Singularity is possible, but much less likely than a series of partially overlapping surges of technological advance. I gave some reasons for that in my H+ talk in London earlier this year, and will have a chapter on it in my book on The Proactionary Principle. So, I doubt we’ll experience anything that I would call a Singularity, for reasons that are technical, economic, and organizational. In addition (and partly because of that), I also think the Singularity idea pulls in too much attention.


Hmmm…. As for me, even though I find a Singularity event this century plausible and even quite likely — I nevertheless tend to agree with you that it’s not the most useful thing to focus on in one’s thinking about the future. The changes that technology is bringing us are equally important and interesting whether they cause a Singularity or “merely” a Surge; and by its very nature, a Singularity is something that’s very difficult to make detailed predictions or analyses about….




That brings to mind another point. One of the things that worries me sometimes about certain strains of Singularitarian philosophy is the idea that “Well, the Singularity is inevitable, so we don’t really need to do anything about it.” But of course this perspective is not tied to the Singularity per se. I mean: one could argue that advanced transhumanist technologies are almost surely coming, no matter what any small group of people says or does about it, and society is of necessity going to adapt. This raises the question: what use are explicitly transhumanist organizations like Humanity+, if all this stuff is going to happen anyway? What are your thoughts on the role of the transhumanist movement in the coming decades?


I would not dispute the view that advanced transhumanist technologies are coming, bar some major disaster or global turn against progress. But how are those technologies going to arrive? How soon, in what form, and controlled by whom? And how well will we, individually and collectively, adapt to the changes they imply? Transhumanist organizations such as Humanity+ can play a valuable role in envisioning a wide range of possibilities and in thinking critically about better and worse pathways to those possibilities. We can help people think ahead, anticipate, and prepare for greatly extended life spans, enhanced cognition, and unprecedented options for selecting our somatic and psychological nature.

Transhumanist organizations like Humanity+ can bring interdisciplinary thinking to the implications of these powerful emerging technologies. Governments and corporations typically have their own agendas and internal and external pressures that limit and distort their thinking about crucial issues. Transhumanist organizations have the potential to improve both critical and creative thinking about technological possibilities, in forums that are not beholden to particular financial, economic, or political interests.


OK, now let’s turn to your talk at the upcoming conference. Your talk at the Humanity+ @ CalTech conference this weekend is titled “How to Optimize Disorder in Today’s Organizations and Tomorrow’s Corporate Selves.” I wonder if you could say a few words about what this means, and also how it ties in with your earlier work on transhumanist philosophy.


Transhumanism can be seen as a certain perspective on the growth of order out of disorder, looking from our current vantage point into the future. In my own version of transhumanism, we are seeking the growth of extropy. [NOTE: “Extropy”, an important concept introduced by Max More in the 1980s, is defined as “The extent of a system’s intelligence, information, order, vitality, and capacity for improvement.”] Organizational extropy is a crucial part of reaching a posthuman future. Human beings co-evolve with technology, and organizations are a social part of that technology. If we’re to understand the past, present, and possible improved futures of human beings, we must understand the range of potential organizational architectures and processes and their shaping factors.

In the talk, I will look at some of those factors, but will mainly focus on how to achieve a dynamically optimal balance between order and disorder, between centralized control and individual/team autonomy, and between stifling stability and uncontrolled change. The connection between the topic and transhumanism becomes clearer when you consider that, further in the future, the distinction between humans and organizations may become fuzzier. Disciplines that seek to understand the individual person and those that seek to understand organizations may help to illuminate each other, and to throw light on the possible natures of posthuman beings.


The conference is going to cover a huge variety of different technologies — life extension, body modification, mind uploading, artificial general intelligence, quantum computing, and a lot more. I’m curious: which technologies that seem to be on the horizon for the next 10 years excite you most? Which of these areas (or maybe something different) do you think has the greatest potential for massive progress in the next decade or so?


I’m hoping for major progress with memory-enhancing pharmaceuticals and other cognitive enhancers. These seem to be on the cusp of moving from research to products, just in time for an aging population. A particularly promising area showing rapid progress is that of social knowledge technologies, including decision markets and prediction markets. I have high hopes for artificial general intelligence, but history causes me to restrain my expectations. The next ten years should see some impressive advances in augmented reality and personalized information gathering and filtration. Some other obvious areas for great progress over the decade ahead are supercomputing and robotics.

After a slow start, personalized medicine and genomics may be poised to start generating major benefits. Genomics is especially benefiting from exponential improvements in sequencing, but that’s only one part of achieving personalized medicine. Synthetic biology should be another hot area. I’m not terribly optimistic about life extension in general over the next ten years. Obviously, that might change if we make further progress in the public mind, resulting in a sizeable boost to the effort and resources devoted to research. Some specific aspects of life extension technology look more promising for the decade ahead, such as the continuing advances with regenerative medicine, including growing tissues and organs.

On the non-technological front, I’m intrigued and excited about the prospects for improved health and extended mean longevity from a modified Paleolithic (or primal) diet and exercise regimen (also called “evolutionary fitness”), including intermittent fasting (IF). I call this “PaleoPlus”, because approximating a Paleolithic diet of animal foods, vegetables, fruits, and some nuts and seeds but no grains or added sugars doesn’t have to mean rejecting genetically modified foods or new findings from nutritional science that could further optimize individual diets.


An interesting trend I’ve noticed lately is an increasing influx of young people into the transhumanist movement. For instance, alongside oldsters like you and me, Humanity+ has two very capable teenagers in powerful positions: Board member and Program Coordinator Tom McCabe, and Assistant Director of Research Bryan Bishop. Those two are exceptional of course, but I think there are a lot of young people with similar interests, since the power and promise of advanced technology is becoming more and more obvious via media and everyday technology. What advice would you give a young person interested in dedicating their life to transhumanism — i.e. to helping themselves and others to advance toward a wonderful transhuman condition?


First, I’d say don’t dedicate yourself ultimately to transhumanism as an ideology. Transhumanism is an excellent framework for understanding the world and the futures we might create, but it isn’t invulnerable to dogmatism. So I’d urge young people (and older people!) to dedicate themselves not to defending transhumanism as a fixed doctrine, but to exploring how transhumanist thinking can change their own lives and those of others for the better. A related recommendation is: don’t wait for “our friends from the future” to solve our problems. Those “friends” may be seen as nanotechnology, artificial super-intelligence, or some other technology.

One way in which, too often, I see people waiting for their friends from the future to solve our problems is in personal health maintenance and longevity efforts. Even some of those who strongly advocate life extension still fail to take relatively easy steps to live longer and more healthily. So, one piece of advice to enthusiastic young transhumanists is: take care of your own health. Building on that, I would urge them to develop themselves emotionally as well as intellectually.

In addition, give serious and regular thought to making money, preferably doing something you love that’s pro-transhumanism. Building personal wealth makes you independent, allows you to afford more expensive early-stage technologies, and means you can afford to fund worthy research efforts. Not all of us excel at making money, but it should never be disparaged in favor of more intellectually “pure” choices.


Which of your current projects would you particularly like readers to know about?


Along with Natasha Vita-More, I’m editing a collection of 48 essays called The Transhumanist Reader. We want to fill a gap: the lack of a single, comprehensive, and authoritative volume covering transhumanist thinking both across time and across topics. The volume includes plenty of classic pieces, but also 21 essays that are either entirely new or heavily revised. We’re currently looking for a top academic publisher for the book. Failing that, we will get it published one way or another during 2011.


Sounds fascinating, I look forward to reading it. It will be great to have something like that to recommend to young people or other futurist newbies who want to know what it’s all about.


Also, starting early in 2011, I want to complement that effort by spearheading the development of a transhumanist knowledge database/reference for Humanity+.


Yes, that will be great. A book covering historical and contemporary contributions will be important, but it’s also important to have a “living document” that reflects the whole scope of transhumanist thought and action as it unfolds. Keeping such a document alive and helping it grow seems like an excellent thing for Humanity+ to be involved with. To borrow a phrase from your forthcoming talk – hopefully the database can help the transhumanist movement (and the world as it becomes more and more transhuman overall) optimize its own disorder!

Well, I think I’ve taken enough of your time now – thanks for a wonderful interview, and I look forward to seeing you in a few days at the Humanity+ @ CalTech conference.


See you at CalTech!



  1. You can’t easily call this “evolutionary fitness”.
    Surely, that term is already taken!

  2. hahahah!


    why dontcha actually contribute instead of looking like a moron… the real fathers of transhumanism are those who are working hard in the lab!

  3. The transhumanist movement is doomed. No matter how sure these idiots are of themselves and the coming transformation, they are a laughing stock.
    Fools. I feel sorry for them. They waste their time on what?
    I know. Do you?

  4. He does emphasize having a life outside of transhumanism, like working to stay healthy, acquiring useful skills to make money, and investing your savings to build wealth.

    Though I don’t know what to invest in right now, given the current economic chaos and the prospect of secular deflationary stagnation in the U.S. like Japan’s.

  5. “The transhumanist movement is doomed”
    Some ideas upheld by some transhumanists are doomed, but the movement is growing.

    Transhumanism is a continuation of the good in humanism; it is the modern answer to postmodernism.

    Maybe a trans-modern period?

    I think the computational-reality view that some transhumanists have is doomed, but science and the artificial-centric view are not over.

  6. I could and should have written the above myself (and find intriguing how very different backgrounds may generate such a high degree of convergence).

    But, hey, Ben chose to interview Max instead. :-)

  7. Thank you, Mark. (I’ve enjoyed and appreciated a number of recent comments from you in response to nastily anti-transhumanist blog posts and news articles.) I do indeed have a life outside transhumanism, and do more than most transhumanists or non-transhumanists to maintain my health. In reply to a related critical comment by a cowardly anonymous person: Not everyone is suited to working in a lab. I know I’m not. Only a fool thinks that no one else can contribute value to society.

  8. Max More expresses a very balanced, level-headed attitude in this interview.

    I think “anonymous” is way off base with his comments. Perhaps this is why he chooses to remain anonymous.

  9. With so much being said about the Singularity, it would be appropriate to take a pause and turn to Max More for a good dose of logic. Why? Because AI and nanotechnology are technologies: they are things or products. There need to be more questions about how and why we are building them. We need to get outside the technologist’s mind and into the crisp, comprehensive thinking of people like Max More.

