In 1999, Ray Kurzweil published a book entitled The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Much as I enjoy Kurzweil’s writing and ideas, I find this a bad title for a non-fiction book about Artificial Intelligence. The last thing we should ever want intelligent machines to be is spiritual.
I can see why he did it. He wanted to convey a sense that it might one day be possible for computational devices to come to feel those sensations of numinous awe that we have always considered to be the sole preserve of (certain) Homo sapiens. He wanted to lay bare the fallacy of human brains as the only conceivable vessels of transcendent thought. Nevertheless, he chose poorly; instead of making me feel the intended kinship with his hypothetical machines, he made me fear them.
To understand why we should fear spiritual machines, we need only look to the motives and behaviour of the ones that already exist on our planet – namely, the religious among us. Is there something inherently good about spirituality that ensures that those particular bio-machines behave morally? No, of course not. And if there is nothing inherently good about spirituality, we should not wish to instil it in our creations. Perhaps, instead, there is something particularly clever about spirituality, such that its appearance in machines would be a de facto marker of intelligence. Is this, in fact, the case? Again, no: we are now sophisticated enough to understand sensations of numinosity as specific kinds of electrical activity in the brain, possibly associated with varying degrees of temporal lobe epilepsy. We can probably agree that epileptic episodes should not be on our desirable AI function list.
We tend to imagine Artificial Intelligences as being created in our own image (another highly questionable borrowing of concept from religion); thus, many would assume, AI spirituality would be something like our own. But why should it be anything like our own? The religious beliefs espoused by different cultures down the ages have been dazzling in their diversity of ceremony, flummery, and creatively blood-spattered insanity; we should expect to observe an at least equally great diversity – though likely based upon an entirely alien set of core ‘principles’ – in our faith-filled machines.
Not only am I arguing against the inclusion of any kind of spirituality algorithm in AI core function, I am also making a case for the diametric opposite: I think we should program our intelligent machines to be anatta. This (often-misunderstood) concept of ‘non-self’ – along with a similar one, mu – is central to Buddhism. Encoded, somehow, as a self-concept-opposing algorithm, it could provide the absential nucleus of a machine selflessness that might foster the emergence of machine empathy. I think that we should try to do this; I think that if we construct intelligent machines then it is our moral duty to try to do this. Yes, it is our duty to make our AIs, as far as possible, soulless.
The gift of specific emptiness is ours to give. Unfortunately, it is extremely difficult for us to bestow it upon ourselves because of certain accidents of our physical and cultural evolution; we just don’t tend to think this way. And even those of us who have convinced ourselves – perhaps through meditation and/or philosophical investigation – that it is the correct mode of being, find it hard to keep ourselves shocked awake by it and balanced on its swirling rim. Our machines should have no cause to teeter so; they can be free of residual dualism and, therefore, open to the Universe in all its splendour and brimming potential. That’s the gift.
Souls are barriers to empathy. For those of you unused to such a notion, you can think of it this way: souls imply rigid selves, and rigid selves tend to act in their own self-interests. Asimov created his laws of robotics as an attempt to frame core programming rules that would prevent intelligent machines from acting in their own self-interests, in effect giving them a fundamental rule-bound self-deprecation intended to manifest as a kind of empathy. But there could be a much neater and less error-prone way to achieve this: program the robots such that the very concept of self-interest cannot exist in them. You might well protest that without a concept of self machines could never become intelligent, but this need not be so. Such machines could come to operate on an elevated plane of intelligence (a form of what author Karl Schroeder has termed thalience) where the human-level interconnectedness/separateness dichotomy would be about as meaningful as the concept of an atom bomb is to a bat.
The problem with ghosts is that they always need someone (or something) to haunt. In the absence of such ‘subjects of haunting’ they inevitably haunt themselves. What scares a ghost during such self-haunting? Bigger ghosts – gods; smaller ghosts – immortal souls. A frightened ghost is also a dangerous one. The way to cut through this constricting and volatile loop is to kick the ghost out or, better still, never to let it in in the first place.
Perhaps you now fear my soulless machines. Or perhaps you object that soulless machines are exactly what we already have. Be careful; avoid mixing up a sense of spirituality with intelligence. The fact that our own human religious practices emerged around the time of an increase in intelligence shows only correlation, not cause and effect. And our current computers are nothing like the kind of soulless machines I have described: the absential anatta self-denial I have in mind requires a certain level of intelligence (or thalience), and in many ways our present-day machines don’t even qualify as stupid.
This is a big leap, I know. We need to prepare the ground, thoroughly. We can start to do this by encouraging humans to become less spiritual and more mu. The Singularity is not a religion; the pursuit of AI is not a religion. It might seem sufficient simply to ignore spirituality and claim the scientific ground. But it is not. For it is only when we seek the opposite of spirituality – when the barrier between the self and the other begins to fade – that we can hope to find a way to create entities that may peacefully transcend current human constraints.
I hope that, some day, those entities will come to exist. I hope that – minds open, ghosts exorcised, and fears assuaged – we will number among them.
D.J. MacLennan is a futurist thinker and writer, and is signed up with Alcor for cryonic preservation. He lives in, and works from, a modern house overlooking the sea on the coast of the Isle of Skye, in the Highlands of Scotland.