Best of H+: The Reluctant Transhumanist

Singularity, 2012: God springs out of a computer to rapture the human race. An enchanted locket transforms a struggling business journalist into a medieval princess. The math-magicians of British Intelligence calculate demons back into the dark. And solar-scale computation just uploads us all into the happy ever after.

Stripped to the high concept, these visions from Charlie Stross are prime geek comfort food. But don’t be fooled. Stross’s stories turn on you, shifting into a vicious scrutiny of raw power and the information economy.

The “God” of Singularity Sky is really just an Artificial Intelligence, manipulating us all merely to beat the alien competition. The Merchant Princes (from a series of novels by Stross) are just as rapacious as anything on Wall Street, and a downstream parallel universe is just another market to exploit. The Atrocity Archives gives us a gutpunch full of paranoia — on the far side of hacking and counterhacking lurks an unspeakable chaos. And for all our engineering genius, Accelerando’s paradise is won at the cost of planetary destruction, with humanity cul-de-sac’d as our future heads off into the stars without us.

In his latest novel, Halting State (released in June 2008), Stross savages the fantasy worlds we escape into for fun and profit, inviting us to peek beneath the surface as our chattering gadgets dress up reality with virtual sword-and-sorcery games, all underwritten by oh-so-creative financial instruments.

All of Stross’s highly connective pipe-dream superstructures are wide open to the one geopolitical prick that will pop them all like the balloon animals they are. Be warned. Take care of the bottom line, or your second life will cost you the life that counts.

It’s no surprise that Stross is a highly controversial figure within Transhumanist circles – loved by some for his dense-with-high-concepts takes on themes dear to the movement, loathed by others for what they see as a facile treatment of both ideas and characters. But one thing is certain – Mr. Stross is one SF writer who pays close attention to the entire plethora of post-humanizing changes that are coming on fast. As a satirist, he might be characterized as our Vonnegut, lampooning memetic subcultures that most people don’t even know exist.

H+: With biotech, infotech, cognitive science, AI, and so many other sciences and technologies impacting the human situation, it seems that most social and political discourse remains back in the 20th century at best. You talk sometimes about being a post-cyberpunk person. How do you deal with the continued presence of so many pre-cyberpunk people?

CHARLIE STROSS: As William Gibson noted, “the future is already here: it’s just unevenly distributed.” Most people run on the normative assumption that life tomorrow will be similar to life today, and don’t think about the future much. And I’m not going to criticize them for doing so; for 99.9% of the life of our species this has been the case, barring disasters such as plague, war, and famine. It’s a good strategy, and periods when it is ignored (such as the millennial ferment that swept Europe around 990 A.D. and didn’t die down until 1020 A.D.) tend to be bad times to live.

Unfortunately, for about the past 200 years — that’s about 0.1% of H. sapiens’ life span as a species — that strategy has been fundamentally broken. We’ve been going through a period of massive technological, scientific, and ideological change, and it has invalidated the old rule set. But even so, at a day-to-day level, or month-to-month, things don’t change so much. So most people tend to ignore the overall shape of change until it’s impossible to ignore. Then they try to apply the old rules to new media or technologies, make a hopeless mess of things, and start on a slow and painful learning process. It’s been quite interesting to watch the slow progress toward an international consensus on certain aspects of Internet culture, for example. In that context, I suspect the mainstream is only a decade or so behind the cutting edge: the debates over spam and intellectual property that the geeks were having in the early 1990s are now mainstream. (Of course, a decade feels like an eternity when you’re up close and personal with it.)

H+: Remaining on the cyberpunk tip for a moment, Gibson’s Neuromancer (the whole trilogy, really) popularized a trendy subculture that impacted both entertainment and actual technology. Do you think that Accelerando could have that effect? Do you see yourself as a popularizer of memes that are just taking root?

CS: Naah.

A chunk of Accelerando was extracted in raw juicy nuggets from my time on the extropians mailing list in the early to mid-nineties; another chunk came out of my time in the belly of a dot-com’s programming team in the late nineties. I wanted to get my head around the sense of temporal compression that was prevalent in the dot-com era, of the equivalent of years flickering past in months. But it’s too dense for the mainstream. As we’ve already noticed, a lot — probably the majority — of people aren’t interested in change; in fact, they find it frightening. And Accelerando compressed so many ideas into such a small space (I think there’s about 0.5 to 1 novel’s worth of ideas in each of its nine chapters) that it’s actively hostile to most readers. Some people love it, the ones who’re already into that particular type of dense fiction-of-ideas, but many, even seasoned SF readers, just turn away.

I would like to hope that I’ve gone some way toward changing the terrain within the SF genre itself, though. Robert Bradbury’s concept of the Matrioshka Brain (or Jupiter Brain, in earlier iterations) is one of the most marvelous SF concepts I’ve run across in a long time, and not trivially easy to refute. I wanted to get past the then-prevalent idea that you couldn’t write about a Vingean singularity — it’s difficult, but we’ve got tools for thinking about these things. And I got the idea of computronium into common enough parlance that Rudy Rucker recently took a potshot at it, implying that it’s part of the universe of discourse in my field.

H+: I’m curious about the Economics 2.0 idea that is featured in Accelerando. What do you think about economic systems in a presumably post-human world? Do any of the theories – free market, Marxist, and so forth – that have guided those who ideologize these things continue to make sense after replicators and the like?

CS: In a nutshell, about Economics 2.0: economics is the study of the allocation of resources between human beings under conditions of scarcity (that is, where resources are not sufficient to meet maximal demand by all people simultaneously). Resource allocation relies on information distribution — for example, price signals are used to indicate demand (in a capitalist economic system). In turn, economic interactions within, for example, a market environment hinge on how the actors within the economic system use their information about each other’s desires and needs.

To get a little less nose-bleedingly abstract: say I am crawling through a desert and dying of thirst, and you happen to have the only bottled water concession within a hundred miles. How much is your water worth? In the middle of a crowded city with drinking fountains every five yards and competing suppliers, it’s worth a buck a bottle. But in the middle of a desert, to someone who’s dying of thirst, its value is nearly infinite. You can model my circumstances and my likely (dying-of-thirst) reaction to a change in your asking price and decide to hike your price to reflect demand. You can do this because you have a theory of mind, and can model my internal state, and determine that when dying of thirst, my demand for water will be much higher than normal. And this is where information processing comes into economic interactions.

What kind of information processing can vastly smarter-than-human entities do when engaging in economic interactions? In Accelerando I hypothesized that if you can come up with entities with a much stronger theory of mind than regular humans possess, then their ability to model consumer/supplier interactions will be much deeper and more efficient than anything humans can do. And so, humans will be at a profound disadvantage in trying to engage in economic interactions with such entities. Those entities will be participating in economic exchanges that we simply can’t compete in effectively, because we lack the information processing power to correctly evaluate their price signals (or other information disclosures). Hence Economics 2.0 — a system that you needed to be brighter-than-human to participate in, but that results in better resource allocation than conventional economic systems are capable of.
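
To make the pricing-with-a-theory-of-mind idea concrete, here is a minimal toy sketch (not anything from Accelerando or the interview; the function, numbers, and scenario are invented for illustration). The seller sets its asking price according to how urgent it models the buyer’s need to be and the ceiling it believes the buyer will pay; the richer and more accurate that model, the more surplus the seller can extract.

```python
# Toy illustration of pricing driven by a model of the buyer's mind.
# Everything here (function name, numbers, scenario) is invented for
# illustration; it is not taken from the novel or the interview.

def asking_price(base_price: float,
                 modeled_urgency: float,
                 modeled_ceiling: float) -> float:
    """Seller's price given a crude model of the buyer's internal state.

    modeled_urgency: 0.0 = indifferent buyer, 1.0 = dying of thirst.
    modeled_ceiling: the most the seller believes this buyer would pay.
    """
    # Interpolate between the competitive price and the buyer's ceiling,
    # weighted by how desperate the seller models the buyer to be.
    return base_price + modeled_urgency * (modeled_ceiling - base_price)

# Crowded city: fountains every five yards, competing suppliers.
print(asking_price(base_price=1.0, modeled_urgency=0.05, modeled_ceiling=5.0))      # 1.2
# Middle of the desert, the only concession for a hundred miles.
print(asking_price(base_price=1.0, modeled_urgency=1.0, modeled_ceiling=10_000.0))  # 10000.0
```

An Economics 2.0 actor, in these terms, is one whose estimates of modeled_urgency and modeled_ceiling are far better than any model the human on the other side of the trade can form in return.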

H+: What do you think about transhumanism and singularitarianism as movements? Are these goals to be attained or just a likely projection of technologies into the future that we should be aware of?

CS: My friend Ken MacLeod has a rather disparaging term for the singularity; he calls it “The Rapture of the Nerds.”

This isn’t a comment on the probability of such an event occurring, per se, so much as it’s a social observation on the type of personality that’s attracted to the idea of leaving the decay-prone meatbody behind and uploading itself into AI heaven. There’s a visible correlation between this sort of personality and the more socially dysfunctional libertarians (who are also convinced that if the brakes on capitalism were off, they’d somehow be teleported to the apex of the food chain in place of the current top predators).

Both ideologies are symptomatic of a desire for simple but revolutionary solutions to the perceived problems of the present, without any clear understanding of what those problems are or where they arise from. (In the case of the libertarians, they mostly don’t understand how the current system came about, or that the reason we don’t live in a minarchist night-watchman state is that it was tried in the 18th and 19th centuries and it didn’t work very well. In the case of the AI-rapture folks, I suspect there’s a big dose of Christian millennialism, of the sort that struck around 990–1010 A.D. and again in the past decade, which they displace onto a quasi-scientific rationale because they’re predisposed to a less superstitious, more technophilic world-view.)

Mind uploading would be a fine thing, but I’m not convinced what you’d get at the end of it would be even remotely human. (Me, I’d rather deal with the defects of the meat machine by fixing them — I’d be very happy with cures for senescence, cardiovascular disease, cancer, and the other nasty failure modes to which we are prone, with limb regeneration and tissue engineering and unlimited life prolongation.) But then, I’m growing old and cynical. Back in the eighties I wanted to be the first guy on my block to get a direct-interface jack in his skull. These days, I’d rather have a firewall.

H+: You said “I’d be very happy with cures for senescence, cardiovascular disease, cancer, and the other nasty failure modes to which we are prone, with limb regeneration and tissue engineering and unlimited life prolongation.” It seems to me that this still puts you in the Transhumanist camp. Would you agree?

CS: To the extent that I don’t believe the human condition is immutable and constant, then yes, I’m a Transhumanist. If the human condition were immutable, we’d still be living in caves. (And I have a very dim view of those ideologies and religions that insist that we shouldn’t seek to improve our lot.)

H+: Earlier on, you referred to the Matrioshka brain. Can you say a bit more about that and why you find it an appealing or, perhaps, realistic concept?

CS: As I said, the credit for the concept belongs to Robert Bradbury, who refined it further from discussions by Eliezer Yudkowsky and others in the mid-nineties, in turn based on speculation by Freeman Dyson going back as far as the 1960s.

Dyson first opened the can of worms by suggesting that we could make better use of the matter of the solar system by structuring it as free-flying solar collectors and habitats in variously inclined but non-intersecting orbits, which would trap the entire solar radiation output and give us access to mind-numbingly vast amounts of energy and inhabitable space.

The extropians took the idea one step further with the notion of computronium — the densest conceivable form of matter structured to maximize computation. What amount of thinking can you get done by building a Dyson sphere optimized to support computation rather than biological life? Bradbury suggested building multiple concentric spheres of free-flying compute nodes, each shell feeding off the waste heat of the next layer in. Some estimates of the computing power of such a Matrioshka Brain (named after the nested Russian dolls) suggest that it would be roughly as far beyond us — the entire human species — as we are beyond a single nematode worm.
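
To get a rough feel for the waste-heat cascade Bradbury describes, here is a back-of-the-envelope sketch (ours, not from the interview): treat each shell as a black body that intercepts the full solar output and re-radiates it from a larger surface at a lower temperature, per the Stefan-Boltzmann law. The shell radii below are arbitrary example values.

```python
# Back-of-the-envelope equilibrium temperatures for nested compute shells
# around the Sun. Toy model: each shell re-radiates the full solar output
# from its outer surface; the radii are arbitrary illustrative choices.
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)
L_SUN = 3.83e26    # solar luminosity, W
AU = 1.496e11      # astronomical unit, m

def shell_temperature(radius_m: float, power_w: float = L_SUN) -> float:
    """Temperature of a sphere radiating power_w from surface area 4*pi*r^2."""
    return (power_w / (4 * math.pi * radius_m ** 2 * SIGMA)) ** 0.25

for r_au in (1, 3, 10, 30):
    print(f"shell at {r_au:2d} AU: ~{shell_temperature(r_au * AU):.0f} K")
# shell at  1 AU: ~394 K
# shell at  3 AU: ~227 K
# shell at 10 AU: ~124 K
# shell at 30 AU: ~72 K
```

In this toy picture, the computation is run off the temperature drop between successive layers: each outer shell feeds on the waste heat radiated by the hotter shell inside it.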

If the idea of procedural artificial intelligence holds water, it’s possible that a Matrioshka Brain (or something like it) is going to turn out to be the end state of any tool-using civilization: after all, the bulk of the mass of which our planet is composed is of no use to us whatsoever (other than insofar as it makes a dent in spacetime for us to stick to), never mind the rest of the solar system…

H+: Moving on, your latest novel, Halting State, is all about different levels of reality. LARPs and Second Life, office politics, the “mammalian overlay” of sexual seduction, financial instruments: they’re all artificial realities, layered one on top of another and all interacting. It’s sort of like what we used to think of as a spiritual realm, but it’s right here running on TCP/IP. It used to be only shamans and schizophrenics who had these sorts of visions, but now, if we’re wearing the special specs, we all get to share this world that’s haunted by imaginary beings. I think of Arthur C. Clarke’s notion that a sufficiently advanced technology is indistinguishable from magic. Do you think the areas and powers that we’re opening up will change us?

CS: What makes you think it’s about us?

We’re human 1.0. We’re not going there. Or we may go down that road, but the things that arrive at the other end won’t be us. (They might remember having started out as us, but I’m not betting on it.)

H+: There’s a nasty little idea buried in Halting State, I think. Like: if you think things are bad when people get their ideas about reality from TV, wait until our imaginations are completely colonized, surveilled, and programmed. Our hero bleakly opines that this is the reason for the Fermi Paradox. There are no signs of alien life because you get so far and then vanish up your own artificial reality. Have I got that right? And is that a prediction?

CS: I try not to make predictions — but I see that one as a distinct possibility (and indeed, as yet another solution to the Fermi Paradox).

 
