With his long flowing beard and optimistic predictions about engineering an end to death, Aubrey de Grey has become a legend within the longevity community. He is currently the Chief Science Officer of SENS Foundation, a US-based charity focused on applying regenerative medicine to the problem of aging. His most recent book is Ending Aging: The Rejuvenation Breakthroughs That Could Reverse Human Aging in Our Lifetimes.
h+: Aubrey, can you tell us a little bit about your work with SENS, for unfamiliar readers?
AUBREY de GREY: The general concept, which for the past nine years I’ve pioneered and promoted under the name "Strategies for Engineered Negligible Senescence", or SENS, is, in my view, our best bet for seriously — maybe even indefinitely — postponing the ill health of old age for people who are already alive today. I have broken down the problem of "preventative maintenance for the human body" into seven major sub-problems, many of which are well on the way to being overcome with contemporary biomedical technology and the remainder of which are, in my view, probably less than ten years away from proof of concept in laboratory mammals and less than 25 years away from clinical application.
h+: What is your general position on the Singularity idea, as described by Vernor Vinge?
AUBREY de GREY: I can’t see how the "event horizon" definition of the Singularity can occur other than by the creation of fully autonomous recursively self-improving digital computer systems. Without such systems, human intelligence seems to me to be an intrinsic component of the recursive self-improvement of technology in general, and limits (drastically!) how fast that improvement can be. So, how likely are such systems? I’m actually not at all convinced they are even possible, in the very strong sense that would be required. Sure, it’s easy to write self-modifying code, but only as a teeny tiny component of a program, the rest of which is non-modified. I think it may simply turn out to be mathematically impossible to create digital systems that are sufficiently globally self-modifying to do the "event horizon" job. And I confess that I rather hope that’s true, because I am virtually certain that the "invariants" that SIAI and others are interested in defining, that will keep such systems forever "friendly" if they can be created at all, don’t exist.
h+: What is your general position on the Singularity idea, as described by Ray Kurzweil?
AdG: I think the general concept of accelerating change is pretty much unassailable, but there are two features of it that in my view limit its predictive power. The first, which Kurzweil has acknowledged, is that one needs to be able to evaluate the informational complexity of a problem in order to get a number for how soon it will be solved. It’s highly questionable, in my view, whether we can estimate the complexity of human thought from the complexity of those very simple parts of the brain that we understand reasonably well, which is what Ray has tried to do. The second problem, which I haven’t seen Ray address, is the extent to which the need for new approaches slows the process. Ray acknowledges that individual technologies exhibit a sigmoidal trajectory, eventually departing from accelerating change, but he rightly points out that when we want more progress we find a new way to do it and the long-term curve remains exponential. What he doesn’t mention is that the exponent over the long term is different from the short-term exponents. How much it differs is a key question, and the answer depends on how often new approaches are needed, which as far as I can see is not at all easy to predict.
h+: In the past five years or so, talk about the "Singularity" has become much more mainstream and acceptable, much like talk about radical life extension. Do you think that looking at futurism through the frame of the multi-faceted "Singularity" idea is helpful, or just makes matters more complicated?
AdG: I think it’s helpful. People have quite extraordinary difficulty thinking about non-linear change, and the general concept of the Singularity (especially the Kurzweil version, but really all versions) is a nicely canonical example to use to educate people in such thinking, even if that education consists mainly in simple repetition. In the case of life extension, the concept of "longevity escape velocity" (the rate at which rejuvenation technologies need to be improved in order to stave off age-related ill-health indefinitely) is similar to the Singularity, though subtly different; indeed, someone recently gave it the rather neat name "Methuselarity". I have been mystified at the difficulty I have in explaining it to people: they find it much more "ridiculous" than the near-term goal of adding 30 years of healthy life, even though in reality it’s far LESS speculative.
h+: You mentioned that self-improving AI would be the only way to get a Vingean event horizon Singularity. What are your thoughts about the prospects of substantial human intelligence enhancement? Would you consider that technological achievement more or less difficult and costly, than, say, extending the average human lifespan to 120?
AdG: I think it’s very difficult to put an estimate on the difficulty of substantial human intelligence enhancement, because there are so many ways in which one could imagine it being done, depending on what we choose to mean by "intelligence", how prosthetic the enhancement is allowed to be, etc. But the way you ask the question implies that you’re talking about exponentially accelerating enhancement of human intelligence, which of course narrows the options a lot. I don’t see any way that that could be done other than by interfacing the brain with exponentially more intelligent digital hardware, but then the question becomes whether the sophistication of the interface matters very much for functionality. I suspect it doesn’t, i.e. that such enhancement will not be very different from having that hardware be autonomous and us interfacing with it by the primitive means that we use today.
h+: Your talk at the Singularity Summit will be called "The Singularity and the Methuselarity: Similarities and Differences." Without giving too much away, can you give us a brief teaser of your talk? What is a "Methuselarity"?
AdG: The Methuselarity is the name my friend Paul Hynek recently gave to the point at which we reach what I have called longevity escape velocity. Longevity escape velocity (LEV), in turn, is the rate at which therapies to repair the molecular and cellular damage of aging need to be improved in order to stop their recipients from becoming biologically older. At present, LEV is very high, far higher than the rate at which we are actually improving our regenerative medicine against aging. However, it turns out that the further we progress in developing such "rejuvenation therapies" (in terms of the number of years by which they postpone age-related ill-health), the lower LEV becomes. Because of this, once we first achieve LEV, it is vanishingly unlikely that we will ever fall below LEV thereafter. Accordingly, the achievement of LEV will be a unique event, worthy of a Singularityesque name.
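[Editor's note: the escape-velocity dynamic de Grey describes can be made concrete with a toy simulation. This is purely illustrative: the function name `biological_age_trajectory`, the starting age, and the improvement rates are invented numbers for this sketch, not SENS projections or de Grey's own model.]

```python
# Toy model of longevity escape velocity (illustrative numbers only).
# Each calendar year a person accrues one year's worth of aging damage,
# while therapies available that year repair some of it. We assume the
# repair capability of therapies improves by a fixed amount per year.

def biological_age_trajectory(start_bio_age, years, annual_gain):
    """Return the biological age at each calendar year.

    start_bio_age: biological age at the start of the simulation
    years:         number of calendar years to simulate
    annual_gain:   extra years of damage repairable per year of progress
    """
    bio_age = start_bio_age
    repair = 0.0                # damage (in years) current therapies repair
    trajectory = [bio_age]
    for _ in range(years):
        bio_age += 1.0          # one calendar year of new damage
        bio_age -= repair       # minus what this year's therapies fix
        repair += annual_gain   # therapies improve each year
        trajectory.append(bio_age)
    return trajectory

# Below escape velocity: therapies improve too slowly, so biological
# age keeps climbing despite steady medical progress.
slow = biological_age_trajectory(50, 30, annual_gain=0.02)

# Above escape velocity: the same compounding works in our favor; after
# an initial rise, biological age peaks and then falls indefinitely.
fast = biological_age_trajectory(50, 30, annual_gain=0.10)
```

The point of the sketch is the asymmetry de Grey highlights: crossing the threshold once is enough, because in the `fast` scenario every further year of progress pushes the recipient further from the danger zone rather than merely keeping pace.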
h+: The idea of extreme life extension is closely connected to the Singularity meme. To what extent do you think that technological progress in computers and bioinformatics is pushing along life extension research?
AdG: Well, first of all I would like to qualify your initial statement. I think there is actually not all that much in common between life extension and accelerating change: the defeat of aging will really be just one event in the progress of technology, albeit a particularly momentous one. I therefore think that the only strong connection between the two is in the similarity of mindset and of attitudes to the future that attracts people to the two themes.
As for your question: Bioinformatics is playing a modest but not central role in hastening progress in the biotechnological approach to postponing aging that I pursue. More general progress in developing full-blown artificial intelligence, however, may well result in a much more dramatic hastening of the defeat of aging, if computers can be created that are much smarter than we are and thus able to solve the trickier problems inherent in postponing aging much faster than we can. I therefore strongly support such research.
h+: Before you became a biogerontologist, you were an AI researcher. According to Wikipedia, in 1986 you "co-founded Man-Made Minions Ltd. to pursue the development of an automated formal program verifier." Do you think that today’s AI researchers are any closer to unlocking the secrets of general intelligence than they were 23 years ago?
AdG: I think we’re closer, yes, but are we much closer? I don’t think we’ll be able to answer that question until we have genuine results — systems that exhibit sharply greater cognitive function than anything that exists today. But it’s important to understand that my work — even in terms of the long-term goals toward which my software-verification work was a first step — was not actually focused on AGI as we use the term today. Rather, as the name of my company indicates, it was focused on creating machines with enough common sense to relieve us of the tedious aspects of the human condition as we know it today, but not to rival us (let alone exceed us) in the creative sense. I’m still quite doubtful that it would, in fact, be desirable to create machines with sufficiently general intelligence to merit being considered conscious.
Aubrey de Grey appears at Singularity Summit 2009, October 3-4 in New York City.