Meatbots Versus the Doucheborgs. Writer Vance Woodward Takes a New Slant on the Singularity
[Editor’s note: from Vance’s Fantastic Future blog, the Meatbot’s guide predicts:
- The rise of sentient (as in, alive) superhumanly intelligent computers.
- Full planetary automation.
- Everyone having off-the-charts IQs.
- Better-than-the-real-thing imaginations.
- Reverse-aging therapies.
- Bonsai safari animals in your home.
- Fully immersive simulated reality.
- Mind and memory sharing with your friends.
And further Vance suggests:
- The Singularity has already happened.
- You are already in a computer simulation of Earth history … right now.
- Death is just a temporary shutoff, and one you’ll never notice. Indeed you can’t die.]
I’m probably like a few thousand other people who came in contact with the idea of the Singularity through the works of Ray Kurzweil. I suppose it makes me something of a wannabe transhuman-singularitarian-arianarian.
Over the last few years, after reading books, blogs and articles — the tone of which typically ranges from morbidly depressing to mean — by dozens of Singularity writers and researchers, I’ve become less concerned about the rise of cyborgs and more concerned with the theoretical rise of what I call the doucheborgs.
This theory goes as follows: Let an ultraintelligent douchebag be defined as a douchebag that can far surpass all the douchebaggy activities of any person, however douchebaggy. Since the design of douchebags is one of these douchebaggy activities, an ultraintelligent douchebag could design even worse douchebags; there would then unquestionably be a ‘douchebag explosion.’
The transhuman movement is an uneasy alliance of narrow-minded engineers and wild-eyed prophets, of militant atheists and uncompromising mystics, who spend a great deal of their time raging about things like the linguistic origins and definitions of the words “transhuman” and “posthuman,” and the top five inevitable dystopian ends of the world.
The result is that there’s not a lot of fun, and even less funny, going on.
That’s why I was thrilled to see “The Meatbot’s Guide to the Technological Singularity: Prepare to Smartify” by blogger Vance Woodward in my Kindle list of releases. I’ve followed Vance’s blog, The Fantastic Future, for a while and always appreciated his optimistic take on the future. And, most of all, he’s funny and irreverent. He doesn’t have a gospel of Singularity. He can tell jokes that don’t exclusively deal with having sex with a robot. (I said exclusively.) He uses puns. He’s irreverent in a niche where only reverent irreverence is reverent.
Anyways, I thought I would bring Vance in and ask him a couple of questions about the Singularity and his book.
Matt Swayne (MS): What drew you to the idea of a Technological Singularity?
Vance: When I was very young, I was quite enthralled with technology and related fields like space exploration. Space Lego was easily my favorite toy. It’s hard to say why that was, in hindsight. You never know. Maybe I just liked grey and blue. Maybe I liked starry skies. Regardless, being human, I soon came up with reasons to make my aesthetic leanings sound logical. So nowadays, I see technology as being the stuff that lets people do what they want to do, both individually and collectively. I reckon that, throughout the ages, technology has been a net good (I daresay indisputably so). Of course, technology now poses, or at least soon will pose, existential risks to humanity. That makes it at least a little important, maybe even as important as The Amazing Race … close anyways. So, just generally, the advance of technology is something we should all be thinking about and discussing. The Technological Singularity itself cranks up the significance of all this to the proverbial “11.” That draws my attention.
MS: What made you decide to write this book?
Vance: In short, I felt like I had something worthwhile to say. I reckon that plenty of other people have resoundingly shown that, within the next few decades, the most powerful computers in the world will be millions and then billions of times more powerful than a human brain. So, that point is well covered. What isn’t well covered or agreed upon is what that means for us. It seems to me that things are going to be fantastic, mind-blowingly so. And I believe my opinion is well justified. Granted, exponentially advancing superhuman intelligence poses, oh, maybe a risk or two. Regardless, I believe that it will all play out favorably. And I wanted to share my conception of how and why things will likely play out very nicely, just in case it might make a few other people happy and excited about the future instead of otherwise. If nothing else, I figure (hope) that like-minded people will enjoy the book, and at least get some pleasure from it.
MS: What does your vision of the Technological Singularity look like?
Vance: I believe things will play out more or less as follows: First, we’ll develop ultra-intelligent non-human intelligence, and with it we will then figure out how to enhance human intelligence exponentially, essentially without limit. (Of course, we’ll sort of transition away from being Homo sapiens in the process.) That will enable us to experience a sort of merging between super-realistic imaginations and super-realistic virtual realities. That’s to start. Next, I believe that we will all willingly share our memories and substrate, essentially allowing us to merge into a sort of singleton. I don’t anticipate this singleton will be a monolithic entity, but maybe more of a steady-state equilibrium, with intelligent beings merging and de-merging continuously. Somewhere in all this, we’ll design and erect enough substrate that, at least from a human level of perception, a person would be able to experience thousands … millions … maybe billions of years of subjective time in seconds … microseconds … femtoseconds of physical-universe time. Maybe that’s overstating it. But maybe it’s understating it. Either way, I figure that we’ll in some sense obtain the ability to subjectively stop the clock of the physical universe (again, from a human level of perceptive and cognitive ability).
MS: What kinds of technology will drive it?
Vance: For the most part, it’ll be raw computational power. We’ll exhaust Moore’s law in the next few years, and then move on to whatever else happens to provide the biggest improvements for the fewest dollars. I don’t particularly have any sense of what exactly the next generation of computational substrate will be. But I have no doubt there will be a next generation. The economic incentives to find and create it are overwhelming. What’s more valuable than intelligence?
MS: What happens after the Singularity?
Vance: I don’t know. I reckon space exploration is a complete non-starter, primarily because we’ll have such a thorough understanding of the laws of physics that we’ll be able to computationally model all varieties of life that could exist in the physical universe. In other words, we’ll be able to figure it all out right here at home, in the comfort of what will become a planet-sized mega-brain. The other possibility (and I do fear this one) is that we might all, with our superhuman intelligence, come to realize that there’s no point to anything and just collectively switch off / bliss out. I go into this in more detail in the book. Maybe we will develop a sufficiently robust intuitive understanding of time that we’ll come to think that the human conceptualization of time (i.e., “after”) doesn’t even make sense. I have no clue how that could be. Maybe our concept of time won’t change so dramatically. But I do think that post-singularity existence will be extremely different compared to the ideas we now have. It’s hard to imagine the particulars. But we can sort of get an idea of the magnitude, I think.
MS: Why are you optimistic? Aren’t you afraid of superintelligent machines ripping the carbon atoms from your body for use in its latest science projects?
Vance: This is probably one of those things where the truly honest answer is, “I don’t know why I’m optimistic,” or “I’m optimistic because it makes me feel good.” But, again, I can come up with all sorts of good-sounding (to me) reasons for my optimism. I suppose I could sum it all up with two general reasons. First, human societies get better (more democratic, more open, more rich, more smart) with technology. There’s more freedom and democracy on Earth now than ever before. I believe that is thanks to technology. Second, I see that technology now allows us all to share and process ever-greater amounts of information. We plebes have a much better idea of what’s going on in government and the world than ever before. And things are continuing to get better. I think the key to ensuring a good future will come through programming superhumanly intelligent computers with massive public and expert input. As long as we can keep super-advanced intelligence out of the hands of the few and in the hands of large groups, then it will all turn out good (or as good as possible). And I do think that is exactly what will happen.
MS: I find that there’s a deep tension between science and spirituality in the Singularity-Transhuman movement. You don’t seem to have any problem mixing those two concepts in your book. Can you explain your philosophy on blending science and spirituality?
Vance: Yes, there’s clearly a tension between spirituality and technology because these days spirituality is basically defined (justifiably for the most part) as being a collection of obviously false claims about the potential to violate the laws of physics. There are many problems wrapped up in that: poorly defined terms and concepts, lazy thinking, still-festering upsets about the past, straw people, more than a few people on all sides needing something to hate.
Just as a superficial counterpoint, I’ve lately been reading Bill Bryson’s book, At Home: A Short History of Private Life. In it, he describes (a total aside to the main subject of the book) how, in 18th- and 19th-century England, the whole priest class had become a sort of highly educated group of people with lots of time to think and do … (no way!) … science! I’m bringing this up as an example of how it’s silly to simply hate on religion, just as much as it’s silly to make laws-of-physics-violating claims about how the physical universe behaves.
I figure that, with superhuman intelligence, we’ll all sort out our differences of opinion and move forward. It’s not going to be a case of Jane proving once and for all to Joe how stupid Joe is, and then Joe thanking Jane for showing Joe how stupid Joe is. Rather, we’ll all be so smart that we’ll understand why people thought the way they thought. So, in a sense, we’ll all realize how logical everybody was being (which, if you’re a die-hard physicalist, you’d have to admit is the case; everything has a cause and therefore is a logical result thereof).
I mean, we all need to realize that the current differences in sanity and intelligence between the dumbest of us and the smartest of us will be, in a post-singularity world, totally insignificant. We’ll look back at our old selves and see dumb, insane monkeys. I don’t mean we’ll feel hatred or contempt about it. It’ll just be part of history. So, the bottom line is that, for me, there’s no sense in hating on mystics or anybody else. We’re all victims of our micro-brains anyhow. Fortunately, I think we’re soon going to be able to effectively re-engineer them on the fly.
MS: The Singularity movement is gaining traction — but do you think it’s missing anything?
Vance: Not particularly. I suspect the vast majority of futurists are actually quite optimistic, but some people run into the problem of not wanting to let bad things happen as a result of being overly optimistic. So, we have a lot of probably optimistic people feeling like they need to voice their concerns about the dangers of technology out of a desire to prevent those dangers from actually happening. That’s sensible enough. But I don’t think anybody’s going to help bring about a better world by insisting that humans are greedy evil beings. I mean, I think people have tried that kind of approach once or twice in the past (e.g. communism and many other religions) and that approach never seems to work out very well.
Anyhow, it seems to me that we all agree (I hope!) that democracy, open debate, critical thinking, rationality, peer review, the scientific method, etc., etc., etc., are all good things. These kinds of values are, I think, so fundamentally important that nothing else really matters in comparison. And these values are very much present in the Singularity community.
Come to think of it, it’d be nice if there were more of a female influence in the culture. It might make the tenor of some discussions a little less venomy and “YOU’RE A DUMBASS!!”-sounding. I’d like that.
MS: What is a meatbot and how will it smartify?
Vance: A meatbot is a biological body (created by natural-selection-mediated evolution). I call bodies meatbots in the book to highlight the fact that human bodies are machines. They are robots. And animal brains are computers. We meatbots are already smartifying thanks to increasing access to information and improving educational methods. I imagine that, within the next twenty years, we will develop seriously effective nootropics to boost our raw intelligence, sociability, mood and creativity considerably. But the truly profound enhancements will come once we can start directly linking external-to-the-skull hardware to our brains (preferably wirelessly, which will be a particularly trivial part of the feat). That will permit us to, essentially, expand our brains without limit. That will come about through very robust MNE tech.
MS: One of the points in your book is that there may be multiple singularities; in fact, we might be part of a prior singularity. Can you explain this?
Vance: Yeah. Part of this comes out of what’s typically called the simulation hypothesis (though I did think of this on my own, as I’m sure many people have done). You could also take something like Ray Kurzweil’s six epochs of evolution, and deem each level-up to be a singularity. You can also look at all the other civilizations in the galaxy and the universe (which I believe almost certainly have long since come and gone and come again and continue to exist). Each one of those civilizations had their own Technological Singularity, I imagine. And finally, we can, I think, assume that there will always be a “next big thing.” So, post-singularity, we’ll go on to evolve and reach some other game-changing event. Anyhow, I wasn’t trying to make any sort of philosophical point by saying there may be multiple singularities. It’s just a matter of definition or metaphor or both. We could just as easily come up with different words for each level-up, wherever and whenever it occurs.