One of the most venerable dreams of science fiction is that people might become immortal by uploading their personalities into some kind of lasting storage. Once your personality is out of your body in a portable format, it could perhaps be copied onto a fresh tank-grown blank human body, onto a humanoid robot or, what the heck, onto a pelican with an amplified brain. Preserve your software, the rest is meat!
In practice, copying a brain would be very hard, for the brain isn’t in digital form. The brain’s information is stored in the geometry of its axons, dendrites and synapses, in the ongoing biochemical balances of its chemicals, and in the fleeting flow of its electrical currents. In my early cyberpunk novel Software, I wrote about some robots who specialized in extracting people’s personality software — by eating their brains. When one of my characters hears about the repellent process, “[His] tongue twitched, trying to flick away the imagined taste of the brain tissue, tingly with firing neurons, tart with transmitter chemicals.”
In this article, I’m going to talk about a much weaker form of copying a personality. Rather than trying to exactly replicate a brain’s architecture, it might be interesting enough to simply copy all of a person’s memories, preserving the interconnections among them.
We can view a person’s memory as a hyperlinked database of sensations and facts. The memory is structured something like a website, with words, sounds and images combined into a superblog with trillions of links. I don’t think it will be too many more years until we see a consumer product that makes it easy for a person to make a copy of their memory along these lines. This product is what I call a lifebox.
My idea is that your lifebox will prompt you to tell it stories, and it will have enough low-level language recognition software to be able to organize your anecdotes and to ask you follow-up questions. As the interviews progress, the lifebox’s interviewer-agent harks back to things that you’ve mentioned, and creates fresh questions pairing topics together. Now and then the interviewer-agent might throw in a somewhat random or even dadaistic question to loosen you up. As you continue working with your lifebox, it builds up a database of the facts you know and the tales you spin, along with links among them. Some of the links are explicitly made by you, others will be inferred by the lifebox software on the basis of your flow of conversation, and still other links are automatically generated by looking for matching words.
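The third kind of link, the automatic kind, is simple enough to sketch in code. Here is a minimal, purely illustrative Python version of word-matching link generation between anecdotes; the function names and the two-shared-words threshold are my own assumptions, not a description of any real product:

```python
from collections import defaultdict

# Small words that shouldn't count as a connection between stories.
STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "i", "my", "was"}

def keywords(text):
    """Lowercase the text, strip punctuation, and keep the substantive words."""
    return {w.strip(".,!?") for w in text.lower().split()} - STOP_WORDS

def auto_link(anecdotes, min_shared=2):
    """Link any two anecdotes that share at least min_shared keywords."""
    links = defaultdict(set)
    kw = {title: keywords(body) for title, body in anecdotes.items()}
    titles = list(anecdotes)
    for i, a in enumerate(titles):
        for b in titles[i + 1:]:
            if len(kw[a] & kw[b]) >= min_shared:
                links[a].add(b)
                links[b].add(a)
    return links

stories = {
    "first car": "My first car was an electric Honda I drove on dates.",
    "first date": "I borrowed the electric Honda for my first date.",
    "roses": "I spent my mornings tending the beds of roses.",
}
print(auto_link(stories))  # "first car" and "first date" get linked; "roses" stands alone
```

A real lifebox would surely do something smarter than raw word overlap, but even this crude matching captures the idea: stories about the same car end up cross-linked without the teller lifting a finger.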
And then what?
Your lifebox will have a kind of browser software with a search engine capable of returning reasonable links into your database when prompted by spoken or written questions from other users. These might be friends, lovers or business partners checking you out, or perhaps grandchildren wanting to know what you were like.
Your lifebox will give other people a reasonably good impression of having a conversation with you. Their questions are combed for trigger words to access the lifebox information. A lifebox doesn’t pretend to be an intelligent program; we don’t expect it to reason about problems proposed to it. A lifebox is really just some compact digital memory with a little extra software. Creating these devices shouldn’t be too hard and is, I’d say, already within the realm of possibility — pocket-sized devices commonly carry gigabytes of memory, and the terabytes won’t be long in coming.
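Trigger-word retrieval of this kind is almost trivially simple. As a hedged illustration, and under the assumption that the lifebox ranks its stored entries by word overlap with the question (the function and variable names here are invented for the sketch):

```python
def answer(question, entries):
    """Return the stored entry sharing the most words with the question.
    Plain word overlap, no reasoning -- the lifebox only retrieves."""
    q_words = {w.strip("?.,!").lower() for w in question.split()}
    return max(entries, key=lambda e: len(q_words & {w.lower() for w in e.split()}))

entries = [
    "My first car was an electric Honda.",
    "I grew roses on the concrete slab out back.",
]
print(answer("Tell me about your first car", entries))  # → "My first car was an electric Honda."
```

Nothing here understands the question; the word "car" simply pulls up the car story. That is exactly the modesty of the lifebox claim: memory plus matching, not mind.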
I discussed the lifebox at some length in my Y2K work of futurology, Saucer Wisdom, a book in the form of a novel, framed in terms of a character named Frank Shook who has a series of glimpses into the future — thanks to some friendly time-traveling aliens who take him on a tour in their tiny flying saucer. (And, no, I’m not a UFO true believer; I just happen to think they’re cute and enjoyably archetypal.)
You might visualize a lifebox as a little black plastic thing that fits in your pocket. It comes with a lightweight clip-on headset with a microphone and earphone. It’s completely non-technical: anyone can use a lifebox to create their life story, to make something to leave for their children and grandchildren.
In my novel, my character Frank watches an old man using a lifebox. His name is Ned. White-haired Ned is pacing in his small backyard — a concrete slab with some beds of roses — he’s talking and gesturing, wearing the headset and with the lifebox in his shirt pocket. The lifebox speaks to him in a woman’s pleasant voice.
The marketing idea behind the lifebox is that old duffers always want to write down their life story, and with a lifebox they don’t have to write, they can get by with just talking. The lifebox software is smart enough to organize the material into a shapely whole. Like an automatic ghost-writer.
The hard thing about creating your life story is that your recollections aren’t linear; they’re a tangled banyan tree of branches that split and merge. The lifebox uses hypertext links to hook together everything you tell it. Then your eventual audience can interact with your stories, interrupting and asking questions. The lifebox is almost like a simulation of you. And over time, a lifebox develops some rudimentary simulations of its individual audience members as well — the better to make them feel they’re having conversations with an intelligent mind.
To continue his observations, my character Frank and his friends skip forward in time until after Ned has died and watch two of Ned’s grandchildren play with one of the lifebox copies he left behind:
Frank watches Ned’s grandchildren: little Billy and big Sis. The kids call the lifebox “Grandpa,” but they’re mocking it too. They’re not putting on the polite faces that kids usually show to grown-ups. Billy asks the Grandpa-lifebox about his first car, and the lifebox starts talking about an electric-powered Honda and then it mentions something about using the car for dates. Sis — little Billy calls her “pig Sis” instead of “big Sis” — asks the lifebox about the first girl Grandpa dated, and Grandpa goes off on that for a while, and then Sis looks around to make sure Mom’s not in earshot. The coast is clear so she asks some naughty questions about Grandpa’s dates. Shrieks of laughter. “You’re a little too young to hear about that stuff,” says the Grandpa-lifebox calmly. “Let me tell you some more about the car.”
My character Frank skips a little further into the future, and he finds that lifeboxes have become a huge industry. People of all ages are using lifeboxes as a way of introducing themselves to each other. Sort of like home pages. They call the lifebox database a context, as in, “I’ll send you a link to my context.” Not that most people really want to spend the time it takes to explicitly access very much of another person’s full context. But having the context handy makes conversation much easier. In particular, it’s now finally possible for software agents to understand the content of human speech — provided that the software has access to the speakers’ contexts.
Coming back to the idea of saving off your entire personality, which I discussed at the outset: there is a sense in which saving only your memories may be enough, provided that enough links among your memories are included. The links are important because they constitute your sensibility, that is, your characteristic way of jumping from one thought to the next.
On their own, your memories and links aren’t enough to generate an emulation of you. But if another person studies your memories and links, that other person can get into your customary frame of mind, at least for a short period of time. The reason another person can plausibly expect to emulate you is that, first of all, people are universal computers and, second of all, people are exquisitely tuned to absorbing inputs in the form of anecdotes and memories. Your memories and links can act as a special kind of software that needs to be run on a very specialized kind of hardware: another human being. Putting it a bit differently, your memories and links are an emulation code.
Certainly exchanging memories and links is more pleasant than having one’s brain microtomed and chemically analyzed, as in my novel Software.
I sometimes study an author’s writings or an artist’s works so intensely that I begin to at least imagine that I can think like them. I even have a special word I made up for this kind of emulation; I call it twinking. To twink someone is to simulate them internally. Putting it in an older style of language, to twink someone is to let their spirit briefly inhabit you. A twinker is, if you will, like a spiritualistic medium channeling a personality.
Over the years I’ve twinked my favorite writers, scientists, musicians and artists: Robert Sheckley, Jack Kerouac, William Burroughs, Thomas Pynchon, Frank Zappa, Kurt Gödel, Georg Cantor, Jorge Luis Borges, Edgar Allan Poe, Joey Ramone, Phil Dick, Peter Bruegel, etc. The immortality of the great ones results from faithful twinking by their aficionados.
Even without a lifebox, someone who doesn’t happen to be an author can still make themselves twinkable simply by appearing in films. Thomas Pynchon captures this idea in a passage imagining the state of mind of the 1930s bank-robber John Dillinger right before he was gunned down by federal agents outside the Biograph movie theater in Chicago, having just seen Manhattan Melodrama starring Clark Gable.
John Dillinger, at the end, found a few seconds’ strange mercy in the movie images that hadn’t quite yet faded from his eyeballs — Clark Gable going off unregenerate to fry in the chair, voices gentle out of the deathrow steel so long, Blackie … there was still for the doomed man some shift of personality in effect — the way you’ve felt for a little while afterward in the real muscles of your face and voice, that you were Gable, the ironic eyebrows, the proud, shining, snakelike head — to help Dillinger through the bushwhacking, and a little easier into death.
The effect of the lifebox would be to make such immortality accessible to a wider range of people. Most of us aren’t going to appear in any movies, and even writing a book is quite hard. Again, a key difficulty in writing any kind of book is that you somehow have to flatten the great branching fractal of your thoughts into a long line of words. Writing means converting a hypertext structure into a sequential row — it can be hard even to know where to begin.
As I’ve been saying, my expectation is that in not too many years, great numbers of people will be able to preserve their software by means of the lifebox. In a rudimentary kind of way, the lifebox concept is already being implemented as blogs. People post journal notes and snapshots of themselves, and if you follow a blog closely enough you can indeed get a feeling of identification with the blogger. And many blogs already come with search engines that automatically provide some links. Recently the cell phone company Nokia started marketing a system called Lifeblog, whereby a person can link and record their daily activities by using a camera-equipped cell phone. And I understand that the Hallmark corporation, known for greeting cards, is researching an online memory-keeping product.
Like any other form of creative endeavor, filling up one’s lifebox will involve dedication and a fair amount of time, and not everyone will feel like doing it. And some people are tongue-tied or inhibited enough to have trouble telling stories about themselves. Certainly a lifebox can include some therapist-like routines for encouraging its more recalcitrant users to talk. But lifeboxes won’t work for everyone.
What about some science fictional instant personality scanner, a superscanner that you wave across your skull and thereby get a copy of your whole personality with no effort at all? Or, lacking that, how about a slicer-dicer that purees your brain right after you die and extracts your personality like the brain-eaters of Software? I’m not at all sure that this kind of technology will ever exist. In the end, the synaptic structures and biochemical reactions of a living brain may prove too delicate to capture from the outside.
I like the idea of a lifebox, and I have already made a primitive version of Rudy’s Lifebox myself, which you can find online. My personal pyramid of Cheops. I see the ultimate version of my lifebox as a website or a cloud-based application that includes a large database with all my books, all my journals, some years of blog entries, and a connective guide/memoir — with the whole thing annotated and hyperlinked. And I might as well throw in my photographs, videos and sound-recordings — I’ve taken thousands of photos over the years.
It should be feasible to endow my lifebox with some interactive abilities: people could ask it questions and have it answer with appropriate links and words. An off-the-shelf Google site-search box does a fairly good job of finding word matches. And it may be that the Wolfram|Alpha search engine — which purportedly has some measure of natural language comprehension — can soon do better.
For a fully effective user experience, I’d want my lifebox to remember the people who talked to it. This is standard technology — a user signs onto a site, and the site remembers the interactions that the user has. In effect, the lifebox creates mini-lifebox models of the people it talks to, remembering their interests, perhaps interviewing them a bit, and never accidentally telling the same story twice — unless prompted to.
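The mechanism behind "never telling the same story twice" is just per-user bookkeeping. As a minimal sketch, assuming the lifebox keeps a set of already-told story titles for each visitor (the class and method names are mine, invented for the illustration):

```python
class Lifebox:
    """Remember each visitor and avoid repeating stories to them."""

    def __init__(self, stories):
        self.stories = stories   # title -> story text
        self.told = {}           # user -> set of titles already told to them

    def tell(self, user, title):
        heard = self.told.setdefault(user, set())
        if title in heard:
            return "I've told you that one -- shall I tell it again?"
        heard.add(title)
        return self.stories[title]

box = Lifebox({"first car": "My first car was an electric Honda."})
print(box.tell("Billy", "first car"))  # Billy hears the story
print(box.tell("Billy", "first car"))  # second time, the lifebox offers to repeat instead
print(box.tell("Sis", "first car"))    # a new listener hears it fresh
```

This is ordinary session-tracking, the same machinery any website uses to remember a returning user, but pointed at the social nicety of not boring your grandchildren.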
If I’m dead by the time my lifebox begins receiving heavy usage, then in some sense I’m not all that worried about getting paid by my users. Like any web or cloud-based application, one could charge a subscription fee, or interrupt the information with ads.
If I use my lifebox while I’m still alive, some other options arise. I might start letting my lifebox carry out those interview or speaking gigs that I don’t have the time or energy to fulfill. Given that many bits of this paper, “Lifebox Immortality,” are in fact excerpted and reshuffled from my other writings, it’s conceivable that my lifebox actually wrote this paper.
Moving on, my lifebox could be equipped to actively go out and post things on social networking sites, raising my profile on the web and perhaps garnering more sales of my books and more in-person speaking invitations. This could of course go too far — what if my lifebox became so good at emulating me that people preferred its outputs to those of my own creaky and aging self?
I don’t, however, see any near-term lifebox as being a living copy of its creator. At this point, my lifebox will just be another work of art, not so different from a bookshelf of collected works or, once again, from a searchable blog.
Looking further ahead, how would one go about creating a human-like intelligence? That is, how would we animate a lifebox so as to have an artificial person?
A short answer is that, given that our brains have acquired their inherent structures by the process of evolution, the likeliest method for creating intelligent software is via a simulated process of evolution within the virtual world of a computer. There is, however, a difficulty with simulated evolution — even with the best computers imaginable, it may take an exceedingly long time to bear fruit.
An alternate hope is that there may yet be some fairly simple model of the working of human consciousness which we can model and implement in the coming decades. The best idea for a model that I’ve seen is in On Intelligence by Jeff Hawkins and Sandra Blakeslee. Their model describes a directed evolution based upon a rich database that develops by continually moving to higher-level symbol systems.
In any case, it would help the progress of AI to create a number of lifeboxes. It may well be that these constructs can serve as hosts or culture media in which we can develop fully conscious and intelligent minds.
But for now, even without an intelligent spark, a lifebox can be exceedingly lifelike.
At the very least — as my friend Leon Marvell has pointed out in our joint essay — we’ve invented a great new medium.
Rudy Rucker, Software, (Ace Books, New York 1982), p. 36. Reprinted in The Ware Tetralogy. In quantum information theory there’s a quite different kind of discussion concerning whether it would be possible to precisely copy any physical system such as a brain. The so-called No-Cloning Theorem indicates that you can’t precisely replicate a system’s quantum state without destroying the system. If you had a quantum-state replicator, you’d need to destroy a brain in order to get a quantum-precise copy of it. This said, it’s quite possible that you could create a behaviorally identical copy of a brain without having to actually copy all of the quantum states involved.
I first used the word in a short story, “Soft Death” (The Magazine of Fantasy and Science Fiction, September, 1986).
Rudy Rucker, Saucer Wisdom, (Tor Books, New York 1999), pp. 57–59.
Thomas Pynchon, Gravity’s Rainbow, (Viking Press, New York 1973), p. 516.
Jeff Hawkins and Sandra Blakeslee, On Intelligence, (Times Books, New York 2004).
Leon Marvell’s remarks can be found in our joint paper, “Lifebox Immortality and How We Got There,” in Re:Live Media Art Histories 2009, edited by Sean Cubitt and Paul Thomas, published by The University of Melbourne & Victorian College of the Arts and Music, and available online in a Creative Commons edition as well.