There are three main human ideological groups in the debate over species dominance: the Cosmists, who want to build godlike artilects (artificial intellects); the Terrans, who are opposed to building artilects; and the Cyborgists, who want to become artilects themselves by adding the necessary components to their own brains. Whether the third vision can prevail, negating the possibility of a devastating war between the first two factions over humanity's future, is uncertain, but well worth a look.
The most murderous ideology in history up to now (see below) has been Communism. The Russian Communist Party killed about 60 million people, mostly under General Secretary Joseph Stalin, one of the greatest tyrants in history. The Chinese Communist Party killed about 80 million people, mostly under Mao Zedong, modern China’s founder and the greatest tyrant in history.
These parties felt they had the moral right to exterminate their enemies because they considered the latter to be utterly evil, and hence devoid of the right to live. They saw their enemies as exploiters, as thieves who siphoned off the “surplus value” of the labor of the proletariat. Translated from Marxist ideological terms into ordinary English, it means that a worker earned his wage by working for a certain number of hours per day. The extra hours he worked went to his employer, who was thus exploiting him, stealing his labor power. Communist ideology emphasized this theft, fueling a powerful hatred of the early capitalists, who did indeed exploit their workers and in many cases became very rich as a result.
The capitalists were a small minority, so Communist ideology favored the idea of exterminating them for the sake of the vast proletarian majority. But, when you start slaughtering millions of people, you can only do this in a highly totalitarian state. Mass murder and totalitarianism then generate new hatreds against state-directed repression, creating further enemies who need to be killed, hence the Communists’ large numbers of victims.
I said "up to now" above because it is quite possible that an even more murderous ideology is on the rise, one that may kill billions rather than just tens of millions of people. That would be Cosmism, the ideology in favor of humanity building godlike artilects later this century, resulting in death on an unprecedented scale.
The Cosmists will push very hard for the creation of godlike artilects, with mental capacities trillions of trillions of times above the human level according to the possibilities allowed by the physics of computation.
To the Terrans, these artilects would represent a profound existential threat to the human species, to such an extent that when push comes to shove, the Terrans will be prepared to exterminate the Cosmists for the sake of the survival of the human species. From the Terran viewpoint, wiping out a few tens of millions of Cosmists is the lesser evil, compared to allowing the Cosmists to build their artilects, which might look upon human beings as such inferior beings that they wipe us out as pests. There would always be that risk – one that the Terran politicians would simply not tolerate.
However, the Cosmists would be prepared for a Terran first strike against them, and with 21st century weapons, the scale of the killing in an Artilect War would reach gigadeath levels – in the billions.
The Cyborg Scenario
The above scenario is mine. Let us call it the “Artilect War scenario.” It is obviously horrific, so not surprisingly, a lot of people have tried to find far less catastrophic alternative scenarios. The main alternative scenario, as advocated by such people as Ray Kurzweil and Kevin Warwick, is explained below.
There will be a lot of people who would like to become artilect gods by adding progressively artilectual components to their own brains, thus creating a continuous transition from humanness to artilectuality. If most of humanity decides to make this transition, then a gigadeath-scale war could be avoided, since there would be no Terrans or Cosmists. Nearly everyone would be Cyborgists, converting themselves into cyborgs.
In other words, the cyborg scenario simply avoids the problem of species dominance by going around it. A bitter confrontation between Terrans and Cosmists can be avoided by suggesting simply that there will be no Terrans and Cosmists. Everyone (or nearly everyone) will have converted themselves into cyborgs. Hence there is no Artilect War, and hence no gigadeath.
Kurzweil and Warwick also add that if a small number of Terrans do decide to fight the cyborgs, the latter would be so much more intelligent than the Terrans that (to use Kurzweil’s colorful phrase) “It would be like the Amish fighting the US Army.” For those not familiar with the Amish, they are a religious sect in the US whose doctrines forbid them from using technology more modern than that of the 19th century. They travel by horse and buggy, refusing to use modern methods of communication such as phones and the internet. In other words, the Terrans would feel so outclassed by the advancing cyborgs that they would very probably abandon any hope of defeating their hugely more intelligent enemies.
Weighing the Two Scenarios
I am very conscious that there is a lot at stake regarding which of the above two scenarios is likely to be more correct. If the first (the Artilect War) scenario is more probable, then I’m glad I will probably not live to see this horror. If the second (cyborg) scenario is more probable, then humanity can escape gigadeath. Thus, from a human perspective, the cyborg scenario is preferable. Instead of billions of human beings being killed, they become gods.
It’s sobering to reflect on the idea that individuals, tapping away on their laptops, can dream up scenarios that may sound like science fiction to most people at the time of writing, but may very well end up becoming true, indirectly killing billions of people. Actually, it’s terrifying. There are times when I put myself in that role and shudder at the prospect.
I wonder if philosophers like Jean-Jacques Rousseau or Karl Marx had any conception of the future wars their ideas would generate, and the tens of millions of people who would die as a result. These “armchair philosophers” have great power, ruling the minds of the politicians whom they motivate to change the world. This makes the Rousseaus and Marxes of the world far more powerful than politicians like Thomas Jefferson, Franklin Roosevelt, Vladimir Lenin or Mao. The former create the ideas; the latter follow.
The species dominance debate places an enormous amount of intellectual responsibility on the shoulders of ideologists. However, it’s important to press on and not be crushed by the enormity of what’s at stake. It’s better to be realistic than optimistic, when faced with a choice between the two. We need to think realistically about which of the above two scenarios is more likely to actually happen in the future.
Before attempting to weigh the plausibility of each scenario, let’s spell them out in a bit more detail. This will allow us to make a more accurate comparison.
How might the cyborg scenario unfold? One can imagine a kind of “cyborgian creep,” whereby people add components to their brains in incremental steps, at such a pace that humanity has enough time to adjust and to accommodate these changes. If the benefits of cyborgization are considerable and hence very popular, then one can imagine that the changes will be widespread. Nearly everyone will want to be modified.
A bit later, the next major set of innovations is discovered, allowing the already-modified humans to update themselves again, in a process that continues indefinitely. Considering that there is potentially more (nanoteched) computing capacity in a grain of sand than in a human brain by a factor of a quintillion (a million trillion), fairly soon the cyborgs are no longer human. The human portion will have been effectively drowned by the artilectual capacities of the machine portion. Effectively, these cyborgs will have become artilect gods.
How likely is the above scenario? It’s the favorite of Kurzweil, Warwick and many others.
Think about it. How incredible would it be to exceed the memory storage capacity of an unmodified human brain? If you could increase your IQ by 10 points, or 50, or 100, wouldn’t you want to do that? Wouldn’t nearly everyone? The stragglers, under pressure from superior competition, would likely follow suit, arguing, “If you can’t beat ’em, join ’em.” Since they would be surrounded by millions of other people (if that’s still the appropriate term) doing the same thing, “cyborging” would acquire the status of being normal. Hence, huge numbers of people will move down the cyborgian route. As Kurzweil puts it, “We (humans) will merge with our machines.”
Kurzweil paints a very rosy, optimistic picture of this process, as humanity enhances its capabilities. Likely this is because his raison d’être is to invent machines that help humanity, like his handheld device that can read text aloud for the blind. Kurzweil gives the impression of being genetically optimistic.
On the other hand, there are people like me, non-Americans, who have lived in the Old World and lack that American optimism. For us, such optimism is often a source of cynicism. We feel we know better, from firsthand experience, about the negative side of human nature.
For example, Europeans endured the Second World War on their own territory. The Chinese lived through Mao’s horrors even more recently. Americans, on the other hand, have to go back a century and a half before they come across a major catastrophe on their own territory, namely the US Civil War. But even that was a relatively minor affair, killing “only” half a million soldiers and largely confined to a few states. Roughly contemporaneously in China, 20 million died during the Taiping Rebellion.
I notice a cultural correlation in the level of pessimism regarding the final outcome of the species dominance issue. Americans are more optimistic than Old Worlders. We are more cynical, viewing the American attitude as rather childlike and naïve. Old Worlders feel they know better, because they have had centuries more experience of how humanity can hurt itself.
How then might the proponents of the Artilect War scenario criticize the Cyborg scenario?
We start with the initial few additions of artilectual components to people’s brains. How will this change things? Common sense says that the variety of “quasi-humans” will then increase. There will be many companies offering such additions, so it is to be expected that some humans will want a lot of change, some less, some not at all. Humanity will thus lose its uniformity, and this “cyborgian divergence” will generate many problems, such as mutual alienation and distrust.
At about the same time, nanotech will be coming into its own. The computational capacity of nanoteched matter is huge, much larger than that of the human brain, as stated above. When quantum computing comes, the superiority factor will be even greater. Thus, fairly quickly, the cyborgs’ behavior patterns will become quite different from those of traditional humans, alarming unmodified humans.
There are two examples I usually use to illustrate this fear. The first is that of a young mother who cyborgs her newborn baby with “the grain of nanoteched sand,” thus converting it into “an artilect in disguise” and in a manner of speaking, “killing her baby,” because it is no longer human. It is effectively an artilect, with a human form. Its behavior will be utterly, utterly alien. This will cause the mother deep distress, once she realizes what she has done; she has lost her baby.
Another example is when older parents watch their adult children “go cyborg.” Their children will then move away from being human to being something else, something their parents are totally unable to relate to. The parents will feel that they have lost their children, causing them enormous stress and bitterness.
The above examples are just scratching the surface. As cyborgification continues, many problems will arise. As humanity is progressively undermined, a lot of people, some very powerful, will take fright and sound the alarm.
These people I labeled “Terrans,” based on the word “Terra” (the Earth), because that is their perspective. They will want to see human beings remain the dominant species on our home planet. Opposing them will be the “Cosmists,” derived from “Cosmos,” who want to build artilect gods which will then presumably move out into the cosmos, in search perhaps of even more advanced artilects from other, more ancient civilizations.
The Terrans will become frightened by the cyborgs all around them, and will probably read the writing on the wall hinting at their own demise. This will evoke a visceral rejection of the cyborgs’ alien nature and growing capacities.
Humans are probably genetically programmed to be fearful of overt genetic difference. Physical anthropologists tell us that there was a time, not too many hundreds of thousands of years ago, when several humanoid species coexisted. It is likely that they were in conflict with each other and learned to fear each other. Some anthropologists think that it was Homo sapiens who wiped out the Neanderthals about 30,000 years ago.
If humans are genetically programmed to fear minor genetic differences such as eye shape and skin color, how much more fearful will Terrans be of cyborgs, who may look the same as humans but behave very differently?
As the cyborg population diverges and profoundly disturbs humanity’s traditional status quo, the Terrans will probably feel motivated to stop the process before it is too late, meaning while they still have the mental abilities to stop it. If they wait too long, they will be unable to match the intellectual power of the cyborgs and artilects, becoming effectively outcompeted.
The Terrans will organize politically, before going on the greatest witch-hunt humanity has ever known. They will go to war against the Cosmists, the Cyborgists, the artilects and the cyborgs. They will aim to keep human beings as the dominant species, because if they sit around and do nothing, fairly soon, the cyborgs and artilects will be indistinguishable from each other and utterly dominant. The fate of the Terrans will then lie in the hands of their superiors.
Which of the above two scenarios do you consider to be more realistic, the optimistic Kurzweilian “cyborg scenario” or the deGarisian “Artilect War scenario”? There appear to be elements of plausibility to both scenarios, so how to weigh their respective likelihoods is an open question.
In my view, this issue will divide humanity profoundly. We already have some evidence of this from surveys, which show that humanity seems to split right down the middle. About half feel that humanity should build artilects or become cyborgs (virtually the same thing from the Terran viewpoint), and the other half are terrified of such developments.
This makes it very important, as awareness of the species dominance issue increases, to perform regular opinion polls on the issue to see just how divisive it is.
Once a sizable proportion of humanity is irredeemably opposed to the rise of the artilect/cyborg, then we have the makings of a major war, an “Artilect War.” The Terrans will be fighting to preserve the human species. The Cosmists will be fighting to build gods. The Cyborgists will ally with the Cosmists to become artilect gods themselves.
What about the timing factor? For example, if the cyborgs and artilects advance faster than the Terrans organize, then it might happen that the artilects/cyborgs come into existence before the Terrans can wipe them out. With their greater intelligence levels, they will easily be able to overcome the Terrans.
The Terrans, however, will be painfully aware of this in the early days of the scenario and will plan for it. They will strike first, while they still have a chance of winning. The Terrans will organize, politicize, and exterminate while they are still able.
The above is my personal view. I think my scenario is more realistic, more probable than the optimistic scenario of Kurzweil and Warwick, although I may be wrong; these things are difficult to judge in advance. I hope I am wrong, so that the artilects do come into being, and humanity is not wiped out, either by open warfare or at the hands of an exterminating artilect population.
But I fear that the most probable scenario will in fact prove to be the worst, leading to an Artilect War, the worst war that humanity has ever known.
What is your opinion? Which way do you think future history will go?