The Greatest Good for the Greatest Number: an Interview with Thomas McCabe, Humanity Plus Program Director

Thomas McCabe is one of three new Directors — along with Max More and Howard Bloom — of Humanity+, the nonprofit that publishes H+ Magazine.

As Program Director, he is responsible for assisting the board in establishing and managing the day-to-day operations and internal processes of Humanity+, including setting employee requirements and managing staff. Thomas, Humanity+ Chairman Ben Goertzel, and Amy Li are currently planning the upcoming Humanity+ conference on December 4-5 at Caltech, which is now open for registration.

H+:  Thomas, you got into transhumanism at a very young age.   What inspired you and why do you think you started thinking about such big issues?  And did you have anybody in your physical proximity to talk to?

Thomas McCabe: I’ve always thought about big issues in general, going back to elementary school. The reason transhumanism and the Singularity caught my attention, back in 2003, was that they struck me as a way to have a very large impact on the world, starting from a relatively small resource base. Technology always works like that — the people and ideas that get in during the initial growth stages of the industry wind up dominating. Harvard is 400 years old, and it’s still the #1 ranked university in America (disclaimer: I’m a Yalie). Ford is 100 years old and it’s still the #1 American car company. Dell, Microsoft and Compaq still dominate the PC industry, and so forth.

I hated adolescence in general and middle/high school specifically. I do think a nontrivial portion (though certainly not all!) of the reason was that I had all of these interesting ideas about the future of humanity, and no one to talk to. The universe that we jam young people into nowadays is very small.

In my day-to-day life, of course, most people still aren’t interested in transhumanism, but I don’t really mind. If there are 6,000 students at Yale and 1% of them are interested — that’s sixty people. But children don’t really have that big pool to draw from. I think that’s largely why so many geniuses disliked school and were mostly self-taught, most famously Einstein, and including our own Eliezer Yudkowsky.

H+:  Staying on the age tip for the moment, you are probably the youngest member of the Humanity+ Board of Directors ever.  Do you bring a fresh perspective to H+ and is there a generational difference that you’ve noticed within transhumanism, more generally?

TM: I’d like to think that I bring in a fresh perspective, but I want to make sure I’m not needlessly flattering myself. I do think that it’s probably good for more young people to be involved in Humanity+, which is why I pushed for Bryan Bishop to be appointed as Assistant Director of R&D. As far as I know, everyone on the Board except me is over 30 (I turned 19 last summer), and a majority are over 40. That seems suboptimal for an organization that wants to be focused on the latest developments in technology and the future fifty or more years out.

I have noticed that the people who work for the Singularity Institute, which is how I originally got involved in transhumanism, are almost all young, rarely over 30 or 35. I don’t think we should have so much of a gap there, given that we share a large number of the same memes. With Singularity Institute president Michael Vassar, I am currently working together to forge closer ties within what I like to call the “technoactivist” community. I just made up that word because there wasn’t an existing one, but I think it would really help us a lot to have a single word — “technoactivism” or something else — that means seriously looking at the technologies we’re likely to develop over the next century, and trying to influence their development. That’s what SIAI and FHI and the Immortality Institute and Humanity+ and IEET and so on all have in common. Though, of course, our opinions differ widely on how technology development should be influenced.

H+: You volunteer for a wide variety of transhumanist organizations (including H+ magazine… thank you), but could you define a singular interest or passion or discipline within the context of transhumanism that you think you want to focus on?   Do you plan on doing scientific research or tech development or the like?

TM: I plan on accomplishing the greatest good for the greatest number. One of the main planks of my philosophy is: the world is a very big place, so you can always do more, and so if what you are doing is a good idea, it should be done on a grand scale. If you can free one slave, why not free a thousand? If you can cure one case of malaria, why not cure a million? If you can make a widget, why not make a billion?

Within the context of transhumanism, I think that translates into building a world where we have competent, well-informed people guiding our society’s technological development. Currently, the people in charge of research and technology funding don’t even recognize the utility of curing aging, a disease that takes thirty-five million lives every year. At the same time, there’s no one who can say “this technology looks like it will lead to individual terrorists having the power to destroy continents, so let’s not have anyone fund it”. My ultimate end goal is to achieve that, and then once that is done, everything else will follow in fairly short order.

To some, this might sound like a negative thing, because it would involve restricting some lines of research which are likely to be dangerous. However, the historical record is that, under our current regime of research and innovation, the very first time we developed a technology with the power to kill billions (nuclear weapons), we almost wiped out our civilization. We must, therefore, handle the development of these technologies differently.

H+: It sounds like you’re talking about relinquishment.  But some of the technologies with the highest potential for destruction also have the greatest potential for transforming the human condition.  For instance, nanotechnology could be very dangerous to all life… and it could end resource scarcity and disease.  Not ending resource scarcity and disease could also be very dangerous to all life, as it can lead to panic and war with powerful weapons.  What do you think about the proactionary principle as a guide?

TM: Currently, one of the main mistakes that the people guiding technological development are making is paying too much attention to very low-level risks. If you’re dying of a terminal disease, and the FDA refuses to approve your treatment because they don’t know if the drug might cause side effects after ten years, they’ve effectively killed you. I fully support eliminating those sorts of regulatory frameworks, which are designed to respond not to actual risk, but to media-amplified public perception of risk.

There are no technologies I know of which are so dangerous, and of so little benefit, that the correct response is to never do any research on them ever. However, if a technology is obviously dangerous, then we as a species simply must have enough unity of will to implement sane policies regarding that technology, at least if we wish to survive the next millennium. Of course, what sane policies are varies from time to time and place to place. In 1940, it was obviously the correct decision to push ahead with nuclear research as fast as possible, because what we wouldn’t do, the Nazis would. On the other side, waiting twenty-five years to sign the Nuclear Non-Proliferation Treaty was (in my opinion) dumb.

Quite frankly, the idea that *not* developing a particular technology would very probably result in panic or global war strikes me as implausible. Suppose the laws of physics were slightly different, so that the technology in question was physically impossible or impractical to build. Would, then, our entire species be condemned to live in global chaos forever?

The benefits of many technologies are, of course, immense, and that’s why I think it’s important to distinguish between technologies where risk and benefit are comparable, and technologies where they are grossly out-of-alignment. For instance, new medicines tend to be in the former group: a new, untested drug might cure your disease, or it might make your disease worse, or give you an additional disease. With these sorts of technologies, our existing frameworks work fairly well, and in many cases restrictions have obviously become too strict over the past fifty years. If heroin makes you addicted and drives you crazy, we’ll figure it out soon enough, and the level of harm done will be relatively minimal, even without government intervention. Ditto for most self-modification tech, since whatever you do wrong, you’ll only screw yourself up.

On the other hand, with nanotechnology, we simply cannot have a situation where anyone can buy a device that can kill a billion people with the push of a few buttons, no matter how much good it might do the purchaser. No matter what else we do, that cannot happen, or we will rapidly go extinct. So we, as a species, need to be able to implement policies that prevent that from happening. A transparent society, a totalitarian new world order, regulation by the UN, a corporate oligopoly, universal cooperation among universities and funding agencies… there are infinitely many options, but we do need something to stop that from happening, and right now there isn’t anything at all.

H+: Do you think a lot about how future developments will affect you personally, or do you think more abstractly about society and humanity and so forth?  Or is it fairly balanced?

TM: On a personal level, of course I’d like a jetpack, but there’s not much I can do to speed that up. Even if I made a hundred million dollars and threw it into a jetpack R&D program, how big of an effect would that really have? I might get the jetpack in 24 years instead of 27.

On the other hand, small decisions can have a big effect on a societal scale, because the benefit of the technology is multiplied across everyone who uses it. If a hundred million people get their jetpacks three years earlier, that’s 300 million additional jetpack-years. (I shall now use the “jetpack-year” as my standard unit of enjoyment.) So yes, as a general rule, I do think of personal stuff in the short term and of future developments in the long term.

H+: What ideas have blown your mind in the last year or so?

TM: I have, to be blunt, probably become more cynical over the past year, driven by sheer incredulity at how many stupid decisions we have collectively made. It really is quite amazing, for instance, that Germany once democratically elected a fairly obvious mass-murdering psychopath. On the other hand, this means that life in the future could be much better, in ways that aren’t obvious just by looking at differences in technology. The Enlightenment gave us electricity and cars, but it also gave us the idea that government and society should be based on reason. If you look at the obituaries of ancient Romans, or actually go and read the Old Testament, you will find that people used to consider it normal to boast about how many townsfolk they murdered while sacking a city, as one of their greatest accomplishments in life. It’s nice to live in a time when people don’t do that anymore.

H+: In the nearer term, over the next few years, what do you think Humanity+ needs to focus on?

TM: I think that one of the big things we need to focus on is becoming more organized, and working to create a more cohesive community. There’s already a very wide-ranging group of thousands of transhumanists, all over the world, with different ideas, different goals and different skills. However, this doesn’t mean very much when this (generally very intelligent and capable) group of people can’t really work together effectively, because there are no systems set up for mass communication and coordination.

As I mentioned in my “campaign speech,” I think we should also start looking into technology research and development ourselves. I think this really is one of the areas where a nonprofit can have a significant impact, because science is (outside of medicine, defense, and highly specialized for-profits like semiconductors) so grossly underfunded. The entire National Science Foundation has an annual budget of only $6B. It would be hard for a nonprofit to become as large as Google or Microsoft, but having 10% of the NSF’s budget and focusing it on the highest-impact transhumanist technologies looks very doable.
