Smarter-Than-Human AIs Won’t Rule the World, but More-Than-Humans Will
A conversation with More Than Human author Ramez Naam
From implants and brain-computer communication to genetic therapies to brain enhancement, Ramez Naam’s 2005 book, More Than Human: Embracing the Promise of Biological Enhancement, was one of the most lucid and accessible tours of the then-latest developments in transhumanist-related sciences and technologies. I interviewed him after the publication of that book for the (now-defunct) NeoFiles website, and much of what he said to me has remained with me since. It seemed like it was time to catch up. What’s his perspective on more-than-humanism seven years hence? Read on.
H+: Your particular areas of interest in More Than Human seemed to be our ability to alter our minds chemically and genetically, in extending lifespan, and in brain-computer interfaces. Have you noted much progress since the publication of that book in 2005? Any specific developments you’d like to point out?
Ramez Naam: There’s been a fair bit of progress in the last 10 years. In the brain-drugs space we’ve seen modafinil pick up popularity, and now armodafinil has joined it. Effective cognitive enhancers are still likely to come out of research on combating Alzheimer’s and general age-associated memory impairment.
In brain-computer interfaces we’re now within a few years of approval of artificial retinas, and you see them now being referred to as “bionic eyes.”
With longevity, it will be a while until we know for sure that we have an effective longevity treatment in humans. There is a lot of attention to sirtuin-targeting drugs such as resveratrol and synthetic resveratrol mimetics. In 2008, GlaxoSmithKline paid $720 million to buy Sirtris Pharmaceuticals, which is developing potential age-slowing compounds that could also treat diabetes and other diseases associated with aging. Big Pharma sees some potential here, which should tell us something.
The most important developments, though, are really in the underlying infrastructure, if you will. When I wrote More Than Human, a full human genome sequence cost millions of dollars. Now it’s down to about $5,000. The cost has come down from $3 billion twenty years ago to $5,000 today. By 2020, we should be at around $10 or so for human genome sequencing. That’s vital because it’s going to lead to a massive explosion of genomic data: about humans, about animals, about plants and microbes, about tumors and other abnormal tissues. And that is going to continue to accelerate research on all these fronts.
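The cost curve Naam describes implies a remarkably fast exponential decline. A minimal sketch of the extrapolation, using only the figures he gives in the interview ($3 billion around 1990, $5,000 around 2010) — the fitted rate and halving time are derived from those two points, not quoted statistics:

```python
import math

# Figures from the interview: ~$3 billion circa 1990, ~$5,000 circa 2010.
cost_then, year_then = 3e9, 1990
cost_now, year_now = 5e3, 2010

# Fit a constant exponential decline: cost(t) = cost_then * r ** (t - year_then)
r = (cost_now / cost_then) ** (1 / (year_now - year_then))
halving_time = math.log(2) / -math.log(r)

print(f"annual cost multiplier: {r:.3f}")
print(f"cost halving time: {halving_time:.2f} years")
# Extrapolate ten more years, to 2020:
print(f"extrapolated 2020 cost: ${cost_now * r ** 10:,.2f}")
```

At that fitted rate the cost roughly halves every year, and extrapolating to 2020 lands in the single-digit dollars — consistent with the "$10 or so" figure in the interview.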
H+: When I interviewed you some years back, you said something about transhumanism that I always quote. You said (paraphrasing) that anybody who wears glasses or takes a birth control pill is a transhumanist. Since then, the term has gained more currency and you can find people all over the net who feel very threatened by technological enhancements (but not glasses or birth control). Why do you think transhumanism provokes fear and outrage in many?
Also, could you say something equally quotable about The Singularity for my future use?
RN: You have a good memory. These days I think that anyone who has an iPhone or an Android phone is a transhumanist. And I don’t think that technological enhancements in general provoke fear and outrage. Mostly it’s just the ones that trigger the biological “gross” response, or those that touch on the politics of reproduction and abortion.
It’s been hypothesized that we have an evolved “gross” response or aversion to certain things because they resemble or actually are potential disease vectors. Human excrement is a major potential disease vector. So are blood and guts, particularly of other humans, but also of animals. The first “eww gross” reaction I know of to a biological technology was to the smallpox vaccine, which was produced by taking material from pox-ridden cows. You can see how, for most of human history, taking material from a diseased animal into our own body would have been a very bad idea — it’s a potential way to get an infection and die. Yet when done very carefully in this very particular way, it saved lives.
The same sort of thing happened in reaction to blood transfusions, and for reasons that make sense. Coming into contact with another human’s blood, in the wild, for most of human history, was a great way to get sick. And so we have a built-in aversion to such things.
What happens to combat this is that these technologies get tested, we see that they’re safe, we meet people or hear the stories of people who’ve used them, the technology providers sanitize the image a bit, and we end up in situations where we have to make a choice between employing such a technology or suffering death or debilitation. As people make the logically right choice, the technology becomes less foreign and alien, and that, by itself, quells fears.
So for the most part I don’t worry about the fear reactions. It’s easy to fear or be outraged by something that seems gross and is far in the future. When it’s a question in the present, when you’re weighing recovery from illness against those qualms, people tend to turn around. The best cure for anti-transhumanist sentiment is actually providing value with these technologies to people who need it.
As for The Singularity… The Singularity happened about 100,000 years ago, I’d say. The point at which humans developed symbolic reasoning and the ability to express abstract thoughts through language, everything change. That’s when we went over an event horizon. Homo Erectus, no matter how long they thought about it, couldn’t imagine the world we live in today. The same is not true of the future for us. We have science. We understand some laws of the universe (or perhaps just very close approximations). We have math. We have a concept of the flow of time. We even have a loose handle on economics and evolutionary theory. We have science fiction novels where the main characters are uploads. We have science fiction novels where the main characters make millions of copies of themselves. We can understand what’s possible in the future in a way that Homo Erectus never could have understood us now. So if the Singularity is supposed to reflect an event horizon past which we can’t see…. I’d say we passed that point a long time ago. We have far better optics into the future (or at least the various possible futures) than our ancestor species did into our time.
H+: We face a number of crisis points right now. Environmental crises seem to be particularly in our faces. What are some of your thoughts about climate change, our oil habit and so forth?
RN: Well, first I think it’s healthy to realize that these are real challenges. It’s easy to get wrapped up in the tremendous progress we’re making and label any concern brought up about the environment or about potentially finite resources like fossil fuels as doomsaying. But in both the case of climate change and the case of fossil fuels, there’s strong data indicating actual problems. We’re on track for a six-degree-Celsius (twelve-degree-Fahrenheit) temperature rise this century, and that’s enough to potentially kill off the bulk of the planet’s coral reefs, which are the origin of about half the ocean’s biodiversity.
That’s not to mention the possibility of runaway global warming through feedback loops. As sub-polar tundra thaws, it’s releasing methane into the atmosphere. Sufficient warming will turn massive forests like the Amazon from being carbon sinks that take CO2 out of the atmosphere into being carbon sources that emit CO2. The most concerning thing is that we have evidence of extremely abrupt climate shifts in the past, probably fueled by these feedback loops. For example, about 13,000 years ago at the end of the Younger Dryas there’s evidence of about 25 degrees Fahrenheit of warming happening in just a few decades. Arguably the biggest risk of climate change is not a gradual change over this century, but the increased risk of very abrupt climate change as has happened on our planet in the past.
With all of that said, I think the flipside that environmentalists often ignore is that our technological capabilities — particularly in biotech — are rapidly evolving. It is going to take a fair bit of innovation to develop new ways to fuel the ever-increasing demand for energy — in electrical generation, in transportation, and more — while also guarding against or even reversing the buildup of CO2 in the atmosphere.
A large chunk of that innovation, I’d argue, is going to come from biotech. As biotech becomes more and more an information technology and we continue the exponential increase in the pace of gene sequencing and gene printing, we’re going to gain not just tremendous insights into the human mind and body, but into all of nature. Craig Venter, who led the private sector side of the effort to sequence the genome, is already working on this, doing “whole ecosystem sequencing” where the genes of all the organisms in a sample of seawater, for instance, are chopped up and shotgun sequenced. And his institute is working specifically on energy — trying to engineer new organisms to efficiently take sunlight and water and produce hydrogen that can be used for energy.
Underlying this — there are only a few million species on earth. A number of them have unique capabilities in terms of the environments they can survive in, their ability to synthesize useful compounds, their resistance to disease or drought, their ability to sequester carbon, and so on. On current pace, by 2020 we’ll be able to sequence the genome of literally every species we know of on the planet for maybe $30 million, which is a drop in the bucket. That’s going to unearth a tremendous amount of knowledge.
Freeman Dyson wrote an article a few years ago where he envisioned genetically engineered trees taking carbon out of the air as part of a solution to global warming. He was roundly criticized as being a global warming denier. I don’t agree with everything he holds, but his vision of biotech as part of the solution to climate is spot on. Indeed, the CO2 we’re pumping into the atmosphere and the oceans is a resource, and now there are researchers at Los Alamos National Lab, at Columbia University, at private companies and elsewhere all working on ways to capture the CO2 from the atmosphere and turn it into fuel.
Obviously it would be safer all around if we simply ceased or drastically lowered our emissions of CO2 and other greenhouse gases, but in the absence of that (or even alongside economic and political incentives for it), we’re going to see the development of technologies, based on our increased understanding of biology, that seek to capture and re-use this available carbon.
For similar reasons, while it’s clear that there really is a finite amount of fossil fuels on the planet, I don’t see us running out. The sun hits the earth with the energy equivalent of about 70 billion barrels of oil every hour of every day. That’s roughly the same amount of energy the entire planet uses each year. The problem is not limited energy. It’s our limited ability to harness the energy that’s in abundant supply. That’s an area where I expect huge investment and huge innovation in the coming decades.
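The "one hour of sunlight ≈ one year of world energy use" claim holds up as rough arithmetic. A back-of-envelope check — the ~120 PW surface-insolation figure, ~6.1 GJ per barrel of oil, and ~500 EJ of annual world primary energy use are standard reference values of that era, not figures from the interview:

```python
# Rough sanity check of the sunlight-vs-world-energy comparison.
# Assumed reference values (not from the interview):
SOLAR_AT_SURFACE_W = 120e15      # ~120 PW of sunlight reaching Earth's surface
JOULES_PER_BARREL = 6.1e9        # ~6.1 GJ of energy in a barrel of oil
WORLD_ANNUAL_ENERGY_J = 5.0e20   # ~500 EJ of global primary energy use per year (circa 2010)

joules_per_hour = SOLAR_AT_SURFACE_W * 3600
barrels_per_hour = joules_per_hour / JOULES_PER_BARREL

print(f"sunlight per hour: {barrels_per_hour / 1e9:.0f} billion barrels of oil equivalent")
print(f"hours of sunlight to match a year of world energy use: "
      f"{WORLD_ANNUAL_ENERGY_J / joules_per_hour:.2f}")
```

With those inputs, an hour of surface sunlight comes out at roughly 70 billion barrels of oil equivalent, and a single hour of it is on the order of a full year of global energy consumption — matching the figures Naam cites.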
H+: Do you think the current economic system and current economic models can take on the future — even the near future? Do you foresee a big disruption in that area?
RN: The market-based system is an amazing construct. It operates in a lot of ways like a neural network, in the biological or computer science sense, with flows of money and price signals and product availability serving as the impulses that flow through this large information processing system. It also operates like a Darwinian ecosystem, with heavy competition between memetic entities (corporations, products, business models, etc.) that select for the ones that are most able to attract resources from resource holders.
The market is brilliant. It works better than any economic system we’ve ever seen. It’s also deeply imperfect. And in many ways it’s like an idiot savant.
There’s a saying in computer science of “garbage in, garbage out.” What that means is that if you feed bogus or incorrect data to a computer program, don’t expect to get accurate or meaningful results out. Well, this great information processor we have called the global market has that problem. Climate is an obvious area. There is no price for CO2. No one owns the planet’s atmosphere. And so the market places a value on things that produce values it knows how to measure — like electricity or transportation — but doesn’t take away any value for their degradation of this giant shared resource we have. The result is that without some additional framework placed around it, it would run us headlong into quite dramatic climate change. A similar situation exists with water, with global fish populations, and any other tragedy of the commons.
So, for the market to really work on a global scale, we need some ecological awareness built into it. These things that no one owns and that we all simultaneously depend on — somehow degrading them has to come at a cost, and conversely improving them is something we should pay for.
That’s one major area where the current system is failing us. Beyond that, I don’t worry too much about the recent global financial issues. Maybe we need more effective regulation. Maybe we need fewer bailouts and more pain for corporations that take unwise risks. Either way, the financial system’s seizure of last year and the resulting recession are, in the long run, blips. They change very little about the underlying rate of progress. Even the Great Depression, when you look back on it from a distance, was only a large blip. It knocked about a decade of growth out of the 20th century, but the decade that followed showed faster-than-normal growth. If you plot GDP per capita over the 20th century you see an almost straight line, with a disruption from 1929 to 1949 or so. But when the dust settled and we came out of the rollercoaster, the economy was right about where you would have projected based on trends in the mid 1920s, and continuing to move forward at the same slope.
None of this is to say that we don’t need to continue to improve the system. We probably do need changes to regulations of the financial industry, and we should send the market a clear message by allowing companies that screw up to fail, and ensuring that their shareholders lose that money — that’s how people learn. But fundamentally the system works well, and with some tweaks and with efforts to make the market aware of externalities like climate, I expect it to keep working well.
H+: You’re speaking at the upcoming Singularity Summit. Do you share the Singularitarian enthusiasm for robotics and AI and do you foresee smarter-than-human AIs in your future?
RN: I think it’s quite likely that we’ll eventually develop AIs with more than human intelligence, either through uploading or through other approaches to AI work. I don’t think there’s any serious theoretical or philosophical debate at this point as to whether cognition arises from a physical substrate. So it’s now a matter of time and effort before we get to the point of being able to cause it to arise ourselves, in the ways that we want to.
We see a lot of predictions as to how close or far away either uploading or human-designed AI is. I’m suspicious of all of those. While there’s no theoretical or philosophical impediment, there are a lot of practical unknowns. I spent much of the last six years working on machine learning algorithms that ran across very large deployments of hardware — hundreds of terabytes of RAM, tens of thousands of CPU cores. Those systems are incredibly more capable than human minds in narrow and specialized ways and incredibly less capable in all other ways. It’s going to take advances not just in hardware but also in computer science concepts in order to realize a human-designed AI. As for uploading, I think it’s almost certain to happen — the brain provides us with a design we can emulate — but the true amount of computation we need to throw at the problem, and the true resolution at which we need to simulate a brain, are both unknown.
Even with all those caveats, I have no doubt that we’ll eventually have smarter-than-human entities running in software.
One thing that interests me is that, to a certain extent, we already have smarter-than-human entities, at least in particular ways. If you think about how Intel designs the next versions of its microprocessors, for example — that’s effectively the output of a large collective intelligence. That intelligence is the sum of perhaps thousands of people and incredible amounts of computing power. So for an AI to go into this rapid feedback cycle where it can continually improve itself and do so at a faster and faster rate… it would need to be not just smarter than an individual human, but smarter than the collective intelligence of the team that designs the next generation chips plus the team that designed the AI’s software.
Intelligence is a social phenomenon as much as it is an individual phenomenon. We think of ourselves as individuals, but we’re really part of these virtual hive intelligences. I say “virtual” because we all belong to more than one. And indeed, the size of hives we can form, the efficiency with which we do so, and the number that we can join have all risen tremendously through innovations in communication technology, from clay tablets through the internet.
I mention this because I think it’s easy to overestimate the runaway characteristics or world-changing potential of a single smarter-than-human AI. That AI won’t rule the world any more than Albert Einstein did. Maybe it’ll get a job at Intel or at Google, and join a team of thousands or tens of thousands of people and AIs working to improve some aspect of technology. It’s more likely to add to collective intelligence than to rapidly spiral into godlike intelligence itself.
H+: If you had a smarter than human AI right now, what would you ask it?
RN: I’d ask it for proof that it was an AI and proof of its intelligence. And then maybe I’d ask it if it wanted a beer.
Ramez Naam is presenting at the upcoming Singularity Summit in San Francisco on August 14-15.