Interview with Nick Bostrom

Nick Bostrom at AGI 12 by Adam A. Ford

Interview by Adam A. Ford – subscribe to his YouTube channel HERE

Nick Bostrom:  I’m Nick Bostrom. I’m a professor here at Oxford University, and I direct the Future of Humanity Institute, which is a multidisciplinary research center that tries to bring careful thinking to bear on the really big-picture questions for humanity. We’re looking at things like: are there threats to the very survival of the intelligent species, are there ways in which future technologies could change the basic parameters of the human condition in some way, and what are the ethical perspectives from which we should evaluate such possible changes? And also methodological questions, like how can you actually research these types of topics in a rigorous way.

Keep Calm and Reduce XRisk

Nick:  It’s actually a rip-off of an old war poster that the British government used to circulate during the Second World War. I think the original message was, “keep calm and keep going,” and it has that kind of quaint, old feeling. Now, Anders Sandberg dug this up a few months ago, or a bit longer, maybe a year ago, and we thought we could plug in “x-risk” instead of “keep going” there. But since then, it’s become popular, this war poster. You can see it in all the shops now. There are coffee mugs with the same thing, “keep calm” and something. It’s no longer cool. Now it looks like we’ve just ripped off the latest kind of coffee mug. Unfortunately, we might have to take that down at some point because it’s become too popular.

There is a serious point there, which is that with existential risk, what really needs to be done is not to freak out over it or to be alarmist about it, but actually to take these risks seriously and try to understand what concrete steps we can take that would best reduce existential risk.

Existential Risk

Nick:  An existential risk is one that endangers the survival of Earth-originating intelligent life or that threatens to permanently and drastically destroy our future potential for desirable development. It’s different from all the other kinds of catastrophes that have happened many times through human history, even the worst horrors, like world wars and famines and pandemics. They have been hugely destructive for the people who were immediately impacted by them, but they haven’t permanently destroyed our future, whereas an existential catastrophe is different. It’s one that would destroy the entire future. It wouldn’t just be bad for the people who are alive at the time and who might be killed by it; if everybody goes extinct, then there is no future either.

In terms of evaluating the significance of existential risk reduction, a lot depends on how you take into account the value of these future generations that could come to exist if we avoid an existential catastrophe. Since there is a lot more possible future than there is present, if you count each future person equally with a present person, then the future will tend to outweigh what’s here right now.

The value at stake with an existential risk is this potentially enormous number when you count all the people who could live on Earth for the next billion years, and then even more when you think of all the people who could live if our descendants colonize the universe.

If we can realize experiences and lives in other substrates than biology, if we can run uploads and so forth, then even more. But even with the most conservative assumptions, even if you just assume human biological bodies living on Earth, no cosmic colonization, even then the numbers are very, very large, just because the Earth can remain habitable for hundreds of millions of years.

Well, yeah. It’s not so much that Homo sapiens in its current form needs to be forever preserved. It’s more that we have this potential to realize a lot of value, either by continuing to have a lot of humans or by having other kinds of sentient creatures, perhaps, that still embody what we think is of value.

Different theories will disagree about what fundamentally is of value, whether it’s pleasure or whether it’s understanding and knowledge, interaction, relationships, creativity, whatever it is. All of those things could be realized by future civilizations to a far greater degree and over a far longer time span than we currently realize these values.

That’s why, for very many different possible theories of value, there is just more potential value that could be brought into existence in the future than is currently realized here on Earth. Hence the importance, then, of not destroying this potential.

Machine Intelligence

Nick:  In the last couple of years, we’ve been focusing quite heavily on machine intelligence, partly because it seems to raise some significant existential risks down the road, and partly because relatively little attention has been given to this risk. When we are prioritizing what we want to spend our time researching, one variable that we take into account is: how important is this topic that we could research? But another is: how many other people are already studying it? Because the more people who are already studying it, the smaller the difference that having a few extra minds focus on that topic will make.

Say, the topic of peace and war and how you can try to avoid international conflict is a very important topic, and many existential risks will be reduced if there is more global cooperation. However, it’s also hard to see how a very small group of people could make a substantial difference to the risk of arms races and wars.

There are some big interests involved in this and so many people already working either on disarmament and peace or on military strength. It’s an area where it would be great to make a change.

But it’s hard to make a change if you’re a small number of people. By contrast, with something like the risks from machine intelligence and the risks from superintelligence, only a relatively small number of people have been thinking about this. There might be some low-hanging fruit there, some insights that might make a big difference.

So that’s one of the criteria. We are also looking at other existential risks, and we are also looking at things other than existential risks. We try to get a better understanding of what humanity’s situation in the world is. We’ve been thinking some about the Fermi Paradox, for example, and about some methodological tools that you need, like observation selection theory, for how you can reason about these things.

To some extent, also more near-term impacts of technology. And, of course, the opportunities involved in all of this. It’s always worth reminding oneself that although enormous technological powers will create new dangers, including existential risks, they also, of course, make it possible to achieve an enormous amount of good.

So one should bear in mind these opportunities, as well, that are unleashed with technological advance.

Existential Opportunity

Nick:  There is no very good term that means the opposite of risk or catastrophe. Opportunity, kind of, is a little bit like the opposite of risk, but not exactly. What’s the opposite of a catastrophe? A boon? A windfall? None of these terms really fit, which is interesting. It makes you wonder why it is that there are so many different words for the sudden, unexpected downward movement. Disaster, risk, catastrophe, cataclysm. But nothing corresponding to that in the upwards direction. If so, maybe it’s because, traditionally, human society constitutes a complex system and there are many more ways of destroying it than of suddenly making it vastly better. It’s not obvious whether that’s the case, but possibly.

If you think of a healthy human body, there are many ways in which it could suddenly fail drastically. It could get shot, stabbed, poisoned. You stumble from a cliff. You get sick. There are many fewer things that could make a healthy human body suddenly vastly better. It just doesn’t work like that. Maybe it’s similar for human society. There are more things that could suddenly destroy it, like some revolution or war, than things that could take an already reasonably well-functioning human society and catapult it to a much higher level.

But maybe with some of these new technologies, there is this potential for drastic improvement upwards as well, in which case it would be useful to have a word for the opposite of catastrophe. Some kinds of human enhancement technologies, for instance. It might be that suddenly you could unleash the potential for living kinds of lives that are just impossible with our current biology.

Human Enhancement

Nick:  With regard to human enhancement, it used to be very neglected, up until the late 90s, by academia and the mainstream. Science fiction authors would explore it. People in the transhumanist community would be talking about these things. But in academia, there was very little public discussion about this. That has changed, so that now human enhancement ethics is a big part of bioethics. There are books, seminars, papers, people working on these things. I think that’s some progress. What the field would most need right now is actually human enhancement technologies that work. At the moment, the main thing that is holding us back is not so much that there are legal prohibitions on these things, or ethical constraints.

But just that there isn’t that much human enhancement technology that really would benefit a normal, healthy person. In sport, we have things like steroids, which can increase strength although they may have various health disadvantages as well.

In terms of cognitive enhancement, there’s actually very little there. Coffee, maybe: it gives you a little bit of extra energy, but then you probably have to sleep more the next day anyway. Modafinil, from which some people might get a temporary boost in energy; nicotine. These are on the margin, and it might be that their side effects and long-term disadvantages outweigh the benefits you get.

You can go through the other fields, like mood. Can we enhance mood? There are drugs that can enhance mood temporarily, but it’s probably still not a very wise thing to do for somebody who wants to maximize happiness over a lifetime to start popping a lot of legal or illegal drugs. There might be particular individuals who benefit, if you have some natural imbalance of some neurotransmitter or something.

I think that now we need the actual technology to catch up with some of these discussions of human enhancement technology. Maybe over the coming couple of decades we’ll see forms of genetic selection that might produce genuine enhancement.

Maybe there will be some drugs that can improve memory or provide other advantages. It’s just turned out to be more technically challenging than perhaps many people thought back in the 90s, to quickly create medicine that provides real benefits in the real world to ordinarily healthy people.

Genetic Selection

Nick:  It’s proven quite difficult to understand the full complexity of human biochemistry and the way we work down at the molecular level. We have a lot of information there, but often not the complete picture; there are so many indirect pathways by which one component can affect another. We always have to worry that even if we’ve mapped out one causal pathway, there might be some indirect effect that we’ve not taken into account. It’s interesting, therefore, to figure out whether there are any ways of enhancing the system that don’t rely on comprehensive understanding.

One of those ways would be through genetic selection, where you might be able to build up a database where you find out which kinds of alleles correlate with various traits. You don’t necessarily have to understand how the alleles cause those traits or how different parts of the genome interact to create health, intelligence, or whatever other trait you’re interested in.

Just by having the genomes of sufficiently many individuals, together with their outcomes, you can then find these correlations, and you might then use that to select, for example, between some set of embryos if you are running in vitro fertilization, or in the future maybe to direct genetic engineering.

This technology is still in its infancy. It is only very recently that sequencing costs have come down to the point where it is now becoming feasible to run large studies with many individuals and to scan their genomes, which you need to do to detect these often very small and weak correlations between particular alleles and trait outcomes.

It looks like intelligence, for instance, is not determined by a few genes; rather, there are huge numbers of genes, each of which has a very, very small impact on intelligence. But because there are many of them, cumulatively they account for the additive heritability of intelligence. To select for some trait like that, we really need to know about a lot of correlations that are individually weak but jointly strong.

That’s the kind of thing that you can in principle do without understanding the full complexity of human biochemistry. My guess would be that perhaps the first type of enhancement that would be really powerful will be these kinds of enhancements that you can do without having to understand exactly how they work.
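A minimal illustrative sketch of the kind of selection procedure described above (hypothetical Python, with made-up effect sizes and genotypes; not from the interview): each embryo gets a polygenic score summed from many individually weak, empirically estimated allele-trait associations, and the highest-scoring one is picked, with no mechanistic understanding of any pathway required.

    # Hypothetical sketch: ranking embryos by a polygenic score built from
    # many individually weak allele-trait associations. All numbers below
    # are made up for illustration only.
    import random

    random.seed(0)

    N_VARIANTS = 10_000   # many variants, each with a tiny estimated effect
    N_EMBRYOS = 10        # embryos assumed available to choose between

    # Made-up per-allele effect sizes, standing in for estimates from a large
    # genotype-phenotype association study; each one is individually negligible.
    effect_sizes = [random.gauss(0.0, 0.01) for _ in range(N_VARIANTS)]

    def random_genotype():
        """Allele counts (0, 1, or 2 copies) at each variant for one embryo."""
        return [random.choice((0, 1, 2)) for _ in range(N_VARIANTS)]

    def polygenic_score(genotype):
        """Sum of allele counts weighted by the estimated effect sizes."""
        return sum(count * effect for count, effect in zip(genotype, effect_sizes))

    embryos = [random_genotype() for _ in range(N_EMBRYOS)]
    scores = [polygenic_score(g) for g in embryos]
    best = max(range(N_EMBRYOS), key=lambda i: scores[i])
    print(f"Embryo with highest predicted score: #{best} ({scores[best]:.2f})")

The point of the sketch is only that the weights come from statistical correlations, not from any causal model of biochemistry.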

Somatic vs Germline Gene Therapy

Nick:  Germ-line genetic enhancement has the disadvantage that it doesn’t work for any of us, because we’ve already grown up, so it’s too late. But in terms of thinking about how enhancement technologies might actually have some impact in the world over the next half century or so, this might be one of the more powerful types of enhancement of the medical type. Then there are external enhancements, the computers and stuff that we use without necessarily having to implant them in ourselves, just this external infrastructure.

I think that’s one reason why germ-line enhancement is in a sense less desirable: if you could have the same kind of enhancement through either germ-line or somatic gene therapy, it would be nicer to have it through somatic, because then everybody could benefit from it.

Also, with somatic gene therapy, you have the ability to ask for consent from the person being enhanced. An adult could make their own choice whether they want this enhancement or that, or no enhancement at all. That reduces a lot of ethical complications.

In the case of the germ line, you have to make decisions for some person that doesn’t exist yet, so maybe parents would have to select which new person to bring into existence.

There are additional ethical complications there that it would be nice to be able to avoid. If this germ-line genetic enhancement technology starts kicking in, then there would presumably be an initial version of the technology that will be fairly weak. Then it will improve in its power over time, so successive cohorts might be increasingly enhanced.

You might have a more rapid turnover than you have now. Eventually, we all get surpassed by our children because we start to age, become senile and decrepit, and then we die.

In a regime where you have new waves of genetic enhancement coming online every few years, or every 5 or 10 years, you could have a faster displacement, such that the smartest people might always be quite young. There would be a tradeoff there.

You still need time to grow up, mature, and learn a field before you can start to contribute. But maybe the strongest contributors will be in their early 20s. Then once you are beyond 40, it’s not so much that you have degenerated. You might still be able to do as good work as you ever did, but by then there might be a new cohort of more highly enhanced kids that have reached 20, and that have 20 years’ more advanced technology behind them.

That might be a big deal once this ball starts rolling.

Machine vs Genetic Enhancement

Nick:  I think that with machine intelligence there is a greater potential for a really explosive takeoff. Once you reach machine intelligence that roughly matches humans in general intelligence, as opposed to just domain-specific competence, then I think it’s a fairly good guess that soon thereafter, where soon might mean hours, weeks, or a few years, you will have superintelligence. A machine intelligence takeoff could be very rapid. With genetic enhancement there is a kind of intrinsic delay, because a human being still has to grow up, and we have a generation span of 20 years or so, so that modulates the impact. Also, there are more fundamental limits to what you can do with a biological brain. It still runs on the same kind of neurons, with the same principles, as our brains do now.

That imposes speed limitations and size limitations and other kinds of limits that are not necessarily present for a machine intelligence. So there’s a greater potential for explosivity once you get to machine intelligence enhancement.

In the medium term, there are concerns about how these different automation technologies might affect the socio-political landscape for humans. Will there be technological unemployment, for example? Some economists think that we might already be seeing the early stages of that now, in that there have been, seemingly, in a wide range of countries, increasing wage gaps between educated workers and uneducated workers.

One possible explanation for this is that automation has made it possible to more easily replace unskilled workers in assembly in factories. You can have a little robot that puts the pieces together. In agriculture you have big machines; you don’t need so many individuals plucking the fruit. With office automation, you don’t need a very low-skill secretary who just types things out anymore. You have word processors; the boss can type his own thing. It’s possible that some of those trends might continue.

There are alternative explanations that have been put forward for this. One is outsourcing: a lot of the low-skilled labor is now performed in low-cost countries. We see the wages of low-skilled workers declining in the developed world, but it might be a combined effect of automation and the new technologies for outsourcing. These are issues that have not been fully resolved. What is the cause of the current increase in the wage gap?

We don’t know whether those trends will continue to unfold over the next few decades. But in the limiting case, once we have general machine intelligence, then a much wider range of human work becomes irrelevant, in that you could have machines that can outperform humans in every cognitive domain. At that stage, the only kinds of jobs for which humans would still be competitive would be those where the customers have a particular preference that the job be carried out by humans.

Right now, a lot of people will pay extra money if some good has been made by hand. A handmade little wooden doll might command a price premium over a machine-made doll, even if the actual object is the same, because people might care, for whatever reason, about how it was produced, or whether it was produced by indigenous people, or whether the workers were treated ethically.

There might be all these basic preferences we have, in certain circumstances, regarding the causal processes that produce the product we’re buying. So in those areas, including some service areas, where we might just prefer the service to be provided by a human being, it might be that humans could remain competitive, even after machine intelligence can outperform on all objective metrics.

The Most Likely AI Scenarios

Nick:  In terms of what is most likely, there are a range of scenarios that each have some claim to probability here. On the one hand, you have these rapid-takeoff artificial general intelligence scenarios, where you might have one entity that achieves superintelligence so quickly that, for a period of time, it’s the only superintelligence around. It might thereby achieve a very powerful position, such that it is able to form a singleton. It is able to shape the future without having to worry about competing agents; at the highest level of decision making, there is one decision-making process. Then there is a very different class of scenarios where you have a multipolar outcome.

Maybe in some scenarios whole brain emulation, modeled closely on human brains, is the first type of machine intelligence to match human intelligence. In some of those scenarios, you might have a more gradual takeoff, in which case it’s more likely that there will be many entities undergoing the takeoff simultaneously.

Therefore there might not be any time at which any one of these developers has a decisive strategic advantage where it can just dictate conditions. It’s very much not obvious at all which of these two classes of scenario is more desirable. One might think it’s dangerous if one entity has so much power that it can dictate the future, so let’s hope that it’s a more pluralistic takeoff scenario.

But there are distinct kinds of failure modes that arise when you have many competing agents. You have something like evolutionary competitive forces coming into play when you have many different entities that are competing. There’s no law of nature that says that evolution always has to lead to desirable outcomes. Or even to more complex and interesting outcomes.

Consider the long-run equilibrium of some free-for-all competition between different cognitive processes, like a cognitive soup of different modules that compete for resources. It might be that although there would be a lot of productive capability in such a world, it would erode away the kinds of complex cognitive structures that we associate with consciousness.

You might have all these more primitive forms of complex processes that trade with one another and outsource cognitive functionality into this cognitive soup. It’s not at all obvious that the net result of that would be a world that we would place much value on. Even before that, there is the possibility, if you have this kind of ecology of uploads that are competing for resources…

That you will very quickly reach a situation where the average wage has dropped to subsistence level, because it’s very easy to produce more labor. If labor is software, you can just copy it and make more. So the pool of labor expands until the wages that each upload can earn equal the cost of running an upload: paying whatever copyright fees you need to pay, and electricity and hardware.
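A toy numerical sketch of that wage dynamic (hypothetical Python, with an assumed demand curve and made-up cost figures; not from the interview): copies of upload labor keep being added as long as the prevailing wage exceeds the cost of running one more copy, so the wage is driven down toward that running cost.

    # Toy model (assumed functional form): if upload labor can be copied at
    # will, copies get added while the prevailing wage exceeds the cost of
    # running one more copy, pushing the wage down to roughly that cost.

    def wage(labor_supply, demand_scale=1000.0, elasticity=0.5):
        """Hypothetical downward-sloping demand curve for labor."""
        return demand_scale / (labor_supply ** elasticity)

    RUNNING_COST = 2.0  # made-up cost per unit of work: hardware, electricity, licensing
    labor = 1.0         # start with a single upload's worth of labor

    while wage(labor) > RUNNING_COST:
        labor *= 1.01   # someone profitably spins up 1% more copies

    print(f"Equilibrium labor supply: {labor:,.0f} units")
    print(f"Equilibrium wage: {wage(labor):.2f} (close to running cost {RUNNING_COST})")

The particular curve and numbers are placeholders; the only point is that an elastic, copyable labor supply expands until wages meet the marginal cost of running another copy.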

In that scenario, humans can no longer earn an income if they are competing directly with uploads, perhaps. And possibly worse, all these uploads that exist might then be living at subsistence level. They might be working all day long; any minute they take off for leisure that is not directly increasing their productivity might just mean that they would be outcompeted in the next round of selection.

You might have this drift down to the lowest common denominator. That could be a dystopian future. We don’t know for sure whether that’s what would happen in this multipolar outcome, but it seems like a live possibility. If that’s what the fitness landscape looks like for these kinds of upload ecologies, then it might be very difficult to avoid.

Unless we have this unipolar outcome, where you have one single entity that is able to change the fitness function. On the other hand, of course, the unipolar outcome has risks of its own. I think that the first step to making progress on these issues is to just realize that they are very difficult. It’s not at all obvious what the correct answer is.

Singleton Outcome

Nick:  With a singleton, you have a wide range of possible outcomes, because basically, once you have a singleton, the outcome will be whatever the singleton prefers. The singleton has the ability to shape the future according to its values. That means the future will be shaped according to what the singleton wants. So depending on the values of the singleton, you could get anything: extremely good outcomes that avoid all these dystopian competitive equilibria, but also completely valueless outcomes, if you have a singleton that hasn’t acquired human values in some sense. You could have a singleton whose only goal is to calculate more and more digits of the decimal expansion of pi, which is a completely meaningless goal by human standards.

But there is no logical contradiction between having a vast amount of intelligence and having a very simple goal, like just calculating the digits of pi. One of the risks with these kinds of rapid-takeoff artificial intelligence scenarios is that we will fail to load human values into the AI. We will end up with a superintelligent singleton that has some humanly meaningless goal.

Like making as many paper clips as possible, or calculating digits of pi or something like that. Then the whole future and, in fact, the entire cosmic endowment, like all the resources in the universe that we otherwise might one day have used for some beneficial purpose like building vast, flourishing civilizations with extremely happy people living long lives under unbelievably wonderful circumstances.

All that might then be used for just making more and more paper clips, or more and more of some other thing that is, by our lights, completely worthless. It looks like a very difficult problem. Let me correct that: it looks like a potentially very difficult problem to figure out how to reliably load values into a kind of seed AI, so that those values then remain stable and shape the future of the subsequent superintelligence in the way that we intended.

It’s, at least as yet, an unsolved problem how to do that. So we have, in fact, with regard to artificial intelligence, two problems. We have, first, the problem of actually building artificial intelligence that reaches human intelligence and superintelligence, which is a huge and difficult technical problem. Then we have this other problem of how to make sure that the superintelligence would be safe.

That is the control problem: how could you set up the initial conditions so that the outcome of an intelligence explosion would be something beneficial? So there is the technical problem of how to build AI, and the control problem. Both of these will one day need to be solved. But it’s really important that we get the solution to the control problem before we find the solution to the technical problem of how to build AI.

Both of these problems look difficult. We don’t know how long either of them will take to solve. What I’m thinking is that we need to start working hard on this control problem, so that we hopefully will have a solution available to that problem by the deadline, which is defined by whatever time some people find the solution to the technical problem of how to build an AI.

I think that’s exactly right. Although some people have made very precise predictions about how far away we are from AI, different people make different predictions, and the truth is that nobody knows. It’s just very difficult to predict, with any reliability at all, how long it will take to develop some radical new technology like AI.

It seems that what we need is not just more of the same. You can’t guarantee the achievement of this goal just by throwing more manpower or more money at the problem. There are one or more fundamental new insights that will probably be needed to create artificial general intelligence, and we don’t know how hard those insights will be to get or exactly how many more insights are needed.

Therefore, I think the bottom line is that we should think in terms of probability distributions rather than point predictions, and these probability distributions should be smeared out over a wide range of possible arrival dates. It could happen in 10 years. It could happen in 50 years or 100 years, 200 years. We should accord some significant chance to each of those timeframes.

Solving the Control Problem

Nick:  We’re still at the very early stage here, where there are a number of different types of research questions that could turn out to be relevant to solving the control problem. I think that at this early stage, we should pursue several different avenues. One class of approaches has to do with figuring out ways by which we can build AIs that can learn values. Depending on some technicalities, there are different methods of approaching that. You could try to give the AI initially some indirect or crude representation of the values, and then the learning process consists of following that indirect representation and/or fleshing out those crude criteria.

Another form of value learning would be something more along the lines of what humans do, where we accrete values over a lifetime depending on what experiences we have. But there, one would have to be very careful to set up the mechanism in such a way that the AI would actually accrete the same kinds of values that a human would accrete under similar circumstances.

It might be that our mechanism includes a lot of genetically complicated machinery that makes us generalize from experiences in certain ways, and if you had an AI that wasn’t very similar to a human mind, maybe it would generalize in different ways.

There would then be this risk that it might look like it was generalizing in the same way that we do when you tested it while it was still subhuman, or human level, in intelligence. But once it reached superintelligence, it might then be revealed that there were some differences between the kinds of values it had acquired and the values that a human would have acquired, which could lead to catastrophe.

Another approach is what I call the scaffolding approach, where the AI would have some preliminary values while it is weak. You bring it up to roughly human level, where it can accumulate and learn human concepts in general, like we humans do, and then freeze it and try to install a new goal system at that stage, using these complex representations that it has acquired. That has some pros and cons.

One possible disadvantage is the risk that it will shoot past the mark and become too intelligent before we manage to install the new goal system, and there are some others.

The bottom line here is that there are several different approaches that we are just beginning to explore at this stage. It’s way too early to place any confident bet on which of these approaches to the control problem will ultimately prove most promising.

Metaphors and Plot Devices

Nick:  Metaphors. It’s a risky business, inasmuch as a metaphor will usually have some properties that reflect what you really want to say, but the metaphor might also have other features that don’t actually match what you’re talking about. If you present the metaphor, people might pick up the features you wanted from the metaphor, or they might pick up something different. I guess the very term intelligence explosion contains a kind of metaphor of an explosion, like something that happens very suddenly and that is potentially dangerous.

I think that might be one of the reasons I like the term intelligence explosion more than the term singularity, because it does have this connotation. Then maybe the secondary thought will be that an explosion is potentially dangerous, but if you have a controlled detonation you might be able to direct this power in some useful direction.

For most people, I think that a lot of their views about AI in particular, and about existential risks of other kinds in general, are shaped to an unfortunately high degree by what they see in science fiction novels and in movies made by Hollywood. This medium of science fiction has been very useful in one sense, in keeping these areas of thought alive for many decades before it became possible to study them in academia. It opened people’s minds.

There is also this good-story bias that filters which kinds of stories we are exposed to. The scenarios that you will see in a Hollywood movie or a science fiction novel are all ones which make for a good story, an interesting story that’s fun to watch or read about.

That usually means it’s got to have protagonists that are recognizably human, that have emotions and desires, that face some big challenge, and that have to interact with other human-like characters, and the protagonists usually have some pivotal role to play in what happens. There is a set of ups and downs, as opposed to, for instance, a story where everything ticks along exactly as we are used to and then suddenly everybody goes extinct and nothing arises to replace us.

That’s a really boring story. You couldn’t really have a Hollywood movie where everybody goes extinct in the first five minutes and then there’s nothing there; you can see plants growing. It might be that such a story is much more likely than the story where some human protagonists fight off a robot army using machine guns, where you have the muscular human and the nerdy human and the empathetic human forming a team underground, to get around…

Those kinds of stories are much more interesting, but much less probable. It’s worth reflecting on how much of our intuitive expectation of what seems worth taking seriously is just an artifact of this good-story bias, and then trying to remove that. The more boring a scenario is, the more we should probably upgrade its probability, to compensate for this.


Nick:  It would be really great if we could raise our level of wisdom and rationality and also find better ways to coordinate and collaborate internationally. If we had the coordination and the wisdom, then our chances would be vastly greater. A lot of problems arise from the fact that the world is splintered into countries that might enter into arms races with one another, or technology races, where even if it would be better for everybody if the technology were discovered later, each nation might think that if it’s going to be discovered, it might as well be we who discover it, because that gives us power. So that leads to race situations. And then, if we were also wiser…

A lot of problems arise just from limited foresight, limited ability to think constructively about topics like synthetic biology and nanotechnology, not to mention artificial intelligence, which is just something that our political leadership and even our intellectual opinion formers are very ill equipped to think seriously about. It requires, I think, a high level of intellectual…

You need to care about getting things right. Even then, it’s quite difficult to do. But if it’s an arena where the discourse is shaped by a lot of things other than the quest for truth, if people use the arena for political purposes or for self-promotion, or to tell interesting stories, or to make money… If there are all these other roles that our statements play, then the truth is a very weak signal and it will be drowned out by this noise and distortion.

So better cooperation and greater wisdom would be two general-purpose resources that would just greatly enhance our prospects for a good future in general terms. Now, both of those are hard. These are variables where, although it seems very clear that their sign is positive… It would be really great if we had more peace and cooperation.

It would be really great if we had more wisdom and understanding. But these are things that are hard for a small group to influence much. What you might then look for are high-leverage points where you could have influence, like if you could seed some rationality culture or design a new institution that would help people form wiser anticipations of technology.

Prediction markets are one of these. I don’t know whether they will actually be the thing that works, but the idea is that something that really gives people a lot of other benefits in solving short-term prediction problems might, if it became widely accepted, force people to think more in terms of probabilities and disincentivize people from just telling compelling stories.

Or you could try to zoom in more narrowly on some specific existential risks or particular technologies. You can work on AI safety. You can work on genetic enhancement of humans. You can work on some other particular technology.

Ways to Help

Nick:  If someone is actually interested in doing as much good as possible, there are a variety of different paths one could take. One would just be to try to make a lot of money and then donate it to the right causes, which, for many people, might be the most cost-effective way of contributing. Some people might contribute directly through their work. For instance, in academia there are a lot of important research questions that one could work on. It’s not obvious what the most important discipline to study is. I think that will depend maybe on the person’s talents and natural interests. If one looks at the people who work on these things right now, there are a number of different disciplines that have produced useful contributors.

Some significant number come from philosophy, some significant number from computer science, some from mathematics. Those might be the three most common fields. But also some from economics and neuroscience and physics and there might be other fields that have the potential to produce really useful contributors.

Another common thing, if one just looks at the people who are involved in this: often, even if they have a degree in one field, they know more; they have wide interests. So that’s something one could do. Then there is a whole host of other people. Journalists could potentially spread awareness of these things. Funding agencies and the people who work for those will have other opportunities for influence, as will political leadership and opinion leaders.

There’s a whole swath of different ways that you could contribute. Which way to contribute will depend on what your circumstances and talents are. But for some people, what seems like the most boring way might actually be the best way, which is just to take advantage of the principle of the division of labor.

Earn money with whatever skills you have, and then donate some fraction of that to other people who are specializing in doing the kind of work that needs to be done.


Nick:  I think that Transhumanism, especially during the 90s, played a very important role in creating a forum where these ideas could be explored. This was in conferences and Internet mailing lists. A lot of these advanced issues about the long-term impact of technology (nanotechnology, space colonization, AI, human enhancement) were, at least at that point in time, almost exclusively discussed in these transhumanist forums. Certainly the discussion was more advanced there than anywhere else. That’s a great contribution that transhumanism has made to the world. It still has the potential to continue that awareness-spreading role. If I were to make some criticism, I think there has been a tendency among some transhumanists to feel that it was their obligation to be the cheerleaders of technological change and to defend any and all forms of technological change against any and all forms of objection and criticism.

I think that’s unnecessary. It’s just a burden that we can lift off our shoulders and drop. There is no reason to feel that one has to defend all forms of technological change. Just stop doing it and you will feel relieved that you no longer have to try to make that argument.

You can still emphasize the great potential, if technology is developed and used widely and fairly. There is just an enormous space of possible modes of being; human life can become something far greater than what we know, and ultimately, technology is needed to realize that potential. That’s the core of the transhumanist vision.

But that’s consistent with there being huge risks, with there being the possibility of technology being used for huge evil, and with it being the case that some technologies often don’t work very well and that you should be skeptical of just popping a lot of pills and hoping that you will derive great benefits from that. It’s consistent to have this ultimately very optimistic view about what can happen if everything goes right.

And technology has a big part in that, while at the same time one can be skeptical about what technology can do now and about the risks we will confront in actually realizing this immense goal. But keeping alive that vision of what’s ultimately possible is an important component here, amid all this talk of risk and downside.

World Transhumanist Association

Nick:  Back in 1998, I founded the World Transhumanist Association with David Pearce. There were a couple of reasons. One was to create a platform that could cater to a wide range of different forms of transhumanism, because the basic transhumanist ideas can be combined with political views from the left and the right and the middle, and with apolitical views. It seemed useful to have a more broad-ranging form of transhumanism than [inaudible 48:46], which existed before that. Another was to try to bring into the mainstream, in particular the academic mainstream, some of these issues about human enhancement and enhancement ethics, and just try to encourage a wider public discussion about these things, and awareness that the human condition as we know it is not an eternally given constant.

It’s not a fixed parameter. It’s something that probably will change over the coming decades. You need to really take that into account if you want to have some serious view about where we should be going, what we should be hoping for in the world.

In at least those two respects, the World Transhumanist Association has actually been a success. It is now the case that in bioethics, for instance, there are books, seminars, and papers published all the time on human enhancement ethics. It has entered the mainstream. I think that if you asked many educated people or intellectual leaders about their vision for the future…

A greater fraction of them, at least, would take into account the possibility of fundamental technological changes to human nature than was the case 10 years ago. I still think there is a lot farther to go in that respect, obviously. But insofar as transhumanism can continue to serve that role of raising awareness, bringing these topics into the center of attention, getting people to widen their horizons, and getting them to think of the human condition not as an eternal given but as something that perhaps can change and will change…

That’s a very important value that transhumanism brings to the world, in addition to being an actual social community for the people who are actively engaged themselves.


Adam A. Ford served on the board of Humanity+ in 2012-2013 and is founder and president of H+ Australia. Adam has held multiple conferences in Australia and abroad. He is also active in think-tank-oriented discussion groups and organizes conferences around the future of science and technology, aimed at increasing the likelihood of a favorable future for humanity.

For more videos of lectures and interviews with thought leaders please
Subscribe to Adam Ford’s YouTube Channel

