H+ Magazine
Covering technological, scientific, and cultural trends that are changing, and will change, human beings in fundamental ways.

Editor's Blog

Hugo de Garis
May 19, 2011


The futurist Ray Kurzweil and I have crossed swords several times in the media in the past on the question of whether humanity will “merge or purge” our increasingly intelligent machines.

This essay presents the two main views on what is likely to happen to humanity as our machines become artilects (godlike, massively intelligent machines trillions of trillions of times above human levels). I have labeled these two views the “merge” view (Kurzweil’s) and the “purge” view (mine) for simplicity. I begin by discussing the merge view.

The “Merge” View (Kurzweil)

Kurzweil thinks that the most likely scenario for humanity as our machines become increasingly intelligent is that, to quote him: “We will merge with our machines.” He thinks that in subsequent steps, the human side of our cybernetic selves will soon be swamped by the vastly greater machine capacity and that we will become increasingly machine-like artilects. He is not alone amongst prominent commentators on the species dominance issue, as many others hold this view. Another prominent figure with a similar take is cyberneticist Kevin Warwick in the UK.

The “Purge” View (de Garis)

I think it is much more likely that the Terrans, the people opposed to the creation of artilects, will go to war against (purge) the Cosmists, who want to build artilects, as well as against the Cyborgists, who want to merge with their machines as Kurzweil does. This means that the Terrans will also go to war against the artilects themselves before the machines become too intelligent. This war will be waged with advanced 21st century weaponry, killing billions of people – hence the term “gigadeath.”

Discussion

Kurzweil is skeptical of my scenario because he thinks that if it finally does come down to a war between the Terrans and the Cyborgs/Artilects/Cosmists, it would be “no contest.” The latter would be much smarter and more capable than the (human) Terrans. As Kurzweil puts it colorfully: “(It would be) like the US Army fighting the Amish!” (For those of you unfamiliar with the Amish, they comprise an American religious sect that doesn’t use any technology later than the 19th century. They drive around by horse and buggy and abstain from using telephones or the Internet.)

Of course, Kurzweil is correct in thinking that if the Terrans wait until the artilects and cyborgs come into being before hitting back, it would be no contest. For this reason, I believe the Terrans will take Kurzweil’s argument to heart, reasoning that they will have to strike first while their intelligence levels are sufficient to have any chance of winning.

Kurzweil’s “no-contest” argument can thus be refuted. But there are other points that can be made regarding a potential conflict between the Cosmists, Cyborgists and the Terrans.

What if Cyborgism proves so popular that there are, in effect, no Terrans left, since everyone follows the cyborg route? If this were to happen, then there would be no gigadeath-scale artilect war, because it takes at least two sides to wage war.

The heart of the issue, and the major thrust of the remainder of this essay, is just how popular Cyborgism will prove to be.

Kurzweil is an inveterate optimist (to the point that he is now publicly defending himself in the media, saying that he is well aware of the potential hazards of the artilects’ rise). His guiding purpose is to better humanity’s quality of life through his inventions, such as his handheld text-to-voice reader for the blind. No one questions the value of his inventions, and I can only praise him in this regard. However, where Kurzweil is weak is in his inability to weigh the pessimistic consequences equitably against the optimistic ones regarding the rise of the artilect, which seems inevitable this century.

It is not enough to be optimistic. Optimism is fine if it doesn’t conflict with realism. When it does, taking a “Pollyanna” view of the world will only get you into trouble with the cold-eyed political realists who have so much experience of the past horrors of humanity. For example, the 20th century was the bloodiest in history, with roughly 200-300 million people killed for political reasons: wars, genocides, purges and ethnic cleansings.

So, taking a cold-eyed look at the rise of the cyborgs and the artilects, what do I think will most likely happen?

I do think there will be cyborgs, probably billions of them. In the early stages, Kurzweil may be right. There may be very few pure Terrans left. Nearly everyone will be adding memory enhancers to their brains so that they can, for example, learn a language in a day and be able to look up facts from a huge nanoscale database in their heads. This view is widely held by many people in the species-dominance community. But it is at this moment in history that the problems really start.

Humanness Destroyed

If everyone modifies himself or herself in the same way at the same speed, then hypothetically, the whole of (post) humanity could march in lockstep into an artilectual future without any real problem. But that is totally unrealistic. What is far more likely is that some people will cyborg themselves fast and heavily, while others do so more slowly and more moderately. It’s also virtually certain that there will be a wide variety of ways to cyborg yourself, offered by a slew of different cyborging companies.

This will lead to what I call the phenomenon of “cyborgian divergence.” There will be a huge variety of quasi-humans among families, couples and friends. Early cyborgs will then wake up viscerally to the fact that traditional humanness is being destroyed and that the emotional price is extreme, causing major alarm bells to go off.

For example, I consider it likely that the artilectual components initially added to human brains will allow significant memory enhancement with not much, if any, intelligence increase. This may have unanticipated side effects, such as personality and behavioral changes pronounced enough to make people feel that they have lost their friends or loved ones.

There is ample scope for Murphy’s Law (if something can go wrong, it will) to operate during this historical period of “cyborgian divergence.”

As more and more early cyborgs begin to wake up to the huge emotional and human cost they are paying, they will learn to value humanness a lot more and will start to make strategic political decisions.

I believe it will take time for the cyborgian components added to people’s brains to move up from being quantitatively superior (faster) to being qualitatively superior (allowing higher intelligence). It will take several decades at least for neuroscience to attain a quasi-full understanding of the neural nature of human intelligence. Cyborgism could be operating several decades before such full understanding is attained and incorporated into cyborgian components.

This gradual increase in qualitative capability will allow the Terran-inclined early cyborgs to remain intellectually competitive with the non-Terran-inclined cyborgs.

I see these Terrans (or early Terran cyborgs) arguing along my traditional lines. They will choose to remain essentially human, viscerally reject what they see happening, and organize politically.

They will be fully aware that time is not on their side. If they wish to remove the risk that they will be superseded by a growing tide of artilects and artilect-like cyborgs, they will have to organize quickly and strike first, purging the cyborgs, the artilects and the cosmists so that the existential threat of humans being wiped out in the future is removed.

Surveys on Species Dominance

Predicting how the mix of Terrans, Cosmists, Cyborgists, and artilects will interact with each other will be complicated, especially as views on species dominance begin to polarize. It would therefore be helpful to be able to work with some real opinion data on this issue; this is something professional sociologists can do.

I think opinion pollsters should start conducting regular polls on the question of species dominance. Since it is this year (2011) that the issue of species dominance is going mainstream in the US media, the general public can begin to think about where they stand on the Terran/Cosmist spectrum, giving fairly informed opinions to the pollsters.

Once you have the data, then more realistic policy decisions can be made by the strategists and intellectuals of the various competing parties.

Some early surveys have already been made, and the results are interesting. I know from the lectures I’ve given over the past two decades on species dominance that when I invite my audiences to vote on whether they are more Terran than Cosmist, the result is usually 50-50.

At first, I thought this was a consequence of the fact that the species dominance issue is too new, causing people who don’t really understand it to vote almost randomly, hence the 50:50 result. But gradually, it dawned on me that many people felt as ambivalent about the issue as I do. Typically, the Terran/Cosmist split would run from 40:60 to 60:40 (although I do notice that with my very young Chinese audiences in computer science, the Cosmists are at about 80%).

I can give two quasi-official poll results on the Cosmist/Terran split. One is by the BBC in October 2006, when the general public was invited to vote between Kurzweil’s optimistic “merge” scenario (about 60%) and my “purge” scenario (about 40%). Another vote took place a year before on the popular US radio show “Coast to Coast.” At the end of my interview, listeners were invited to vote their preference, Terran or Cosmist, and the split was 55% Terran to 45% Cosmist.

This more or less 50:50 split will only make matters worse, I feel. If the split were 10:90, or 90:10, then one group could wipe out the other if it came to a war, and humanity would not be nearly as traumatized as in a 50:50 situation. The 50:50 split, if it is maintained, could not be worse. It shows how profoundly divisive this species dominance issue is, only increasing passions and the size of the final horror: a gigadeath-scale artilect war.

33 Comments

    This analysis is too Manichean. There are so many possible mixed scenarios, but it seems certain that:

    1. Eventually some artilect somewhere will act with malintent
    2. Some humans will cyborgify themselves. Some of these will be loyal to human interests.

    I bet the idea of an intelligent species becomes blurry to indistinguishable as virtually everything / everyone has mixed heritage / enhancement in some form or another, perhaps even making everyone a unique "species" in this way.

    Likely there will be several wars between various groups for dominance of whatever.

    One thing I'm pretty certain of is that neither the "merge" nor the "purge" scenario will play out as simply as described here.

    Look, plenty of people stay dumb now (they watch TV, never read books, don't go past high-school, etc) and plenty of us move forward with our education (autodidactic or formal); and this stratification doesn't make the 'laggers' want to kill the 'movers'... shit, the laggers like being dumb; drinking beer and watching Celebrity Apprentice.

    The 'terrans' will likely feel just fine about their station and while resenting the 'cosmists' (just like the luddite proles resent the smarty-pants eggheads currently), there is no need to think it will come down to war. What makes you think it will come to war? Or that even under current techno-scenarios the luddites would win? Look at how badly those retrograde muslim-fanatics (e.g., al-Qaeda) are losing (and they are losing; despite their bravura and our knee-jerk fear-response).

    Anti-modernity movements are doomed in my estimation, now and forever. But it's likely the species is still doomed writ large-- but the old-skool kids go first, then the rest of us. Maybe... who knows? But, to quote Jim Morrison, "I want that nano-bot upgrade until the whole shithouse goes up in flames!"

    Look at how successful the Luddites were at stopping technology, or indeed, at inspiring people to their anti-technology cause. There will be no war, at best there will be some neo-Luddite terrorists. There are real existential threats in the future, but war with Luddites isn't one of them.

      Luddites never were anti-modernity or anti-tech. It was a different time - no compensation etc., so they tried to force factory owners to pay them. That's it. And of course there are no Luddites today.

    "I do notice that with my very young Chinese audiences in computer science, the Cosmists are at about 80%"

    It would be fascinating to hear more about this. How much weight to put on 'young' vs 'Chinese'? Does it mean the envisaged polarization over posthumanity is likely to overlap or blur with more conventional national / ethnic / civilizational tensions?

    I'd rather leave the Terrans be, but Zeus help any who interfere with my self-given right to do as I please with my own mind and body.

    If the squishy's get all emotional and irrational and attack the Cosmists, then any consequence would be their own doing.

    My mind is mine alone, if I want to improve it or risk ruining it then that is nobody else's decision to make.

    I WILL ADD : THE THIRD VIEW : THE USELESS VIEW

    A VIEW WHERE LIFE AND LOVE STILL EXIST

    ..............

    https://singularite.wordpress.com/2011/05/18/transhumanistes-geek-conscience-non-adulte-gamins-autistes-insensibles-aux-autres-egoites-et-insatiables-imbeciles-suicidaires/

    You are wrong, your mind works in a capitalist way,

    and you are wrong, since the beginning of mankind

    Let's talk about economy : how will an intelligent being deal with the infinity of time and space, the abundance of energy and matter?

    And relativity of TIME ?

    https://singularite.wordpress.com/2011/04/29/temps-controle-le-futur-est-previsible-vous-etes-modelisable-google-cia-minority-report/

    your way of viewing the world ( and the future ) is wrong

    Second point you think : an "Artilect" will maybe kill us all : why ?

    Because you think in terms of utilitarianism, capitalism

    YOU KNOW : life is art, art is useless, life is useless, work is useless

    If the "god" you create think like YOU it will destroy everything ... and moreover you think you are right : you are insane

    WE are all useless since the beginning of mankind !!! ... can you deal with the present and the past ? Let's say we go back to the past, humans smarter because we learned from experience and science ... what do we do ? Well : we have resources, we have demography, and we limit demography, and we live !!!

    LIFE IS SIMPLE : it has no reason behind it to live ... there will be no war with AI,

    there may be a war between stupid transhumanists who think they are smarter : but they think in a capitalist materialist way !

    life is USELESS, work is useless

    YOU are useless

    If you cannot design a society where even you are useless : you cannot define a stable society .....

    You think you are smart, you think you will survive by uploading etc. Well : an AI is already better than you : with more CPU it will be even better than you

    Human network mind is not time optimised

    I SAY : a capitalist view is the problem

    Even if machines are already better than us, and can do everything ... you think in terms of utilitarianism

    insane people !

    Can you deal with relativity of time ?

    https://singularite.wordpress.com/2011/04/26/singularite-temporelle-voyage-dans-le-temps-controle-absolu/
    ...

    Can you deal with that : singularity already happened somewhere

    and you are here, and you live, and nobody destroyed you

    All men know the use of the useful, but nobody knows the use of the useless!

    https://secure.wikimedia.org/wikipedia/en/wiki/Zhuangzi_%28book%29

    Zhuang ZI

    you think war is the future, because you think in terms of capitalism

    you are WRONG

    PEACE IS THE FUTURE

    PEACE IS ALWAYS THE FUTURE

    Let's read : the Bible and the beginning, you know

    you are in Eden where you have everything : but you are an animal

    You suddenly have a little bit of consciousness

    yes

    what you do : you duplicate and have billions of yourself

    you create wars, because you want what other people have, and you cannot define what you want : and oh you want babies lot of babies

    some people in human history understood that peace is in accepting the uselessness of everything , with love, empathy, sharing : we can live in peace ( we will not survive a war )

    You know another point : we have the capacity to kill us all ... the power of weapons was also exponential

    you know what happened : nothing : we have not destroyed ourselves ( hopefully )

    and we learned

    LEARNING : it is not about your brain, or a brain modification

    LEARNING is about experiences, information, sharing

      PEACE : More consciousness : or die

      THE ONLY WAY FOR US IS CONSCIOUSNESS

      https://singularite.wordpress.com/2011/05/15/krishnamurti-le-future-avec-les-intelligences-artificielles-lautomatisation-de-tout-la-genetique-etc-la-prison-du-spectacle-et-vous-la-conscience-ou-la-mort/

      It is not about uploading, or brain or iq gene modification

      (no not primarly )

      IT IS about having more consciousness , more righteousness, more understanding, more peace and empathy ( the empathic civilisation )

      MORE SPIRITUAL EDUCATION

      MORE CONSCIOUSNESS BASIC FRAMEWORK

      MORE RATIONALITY

      MORE EMPATHY, LOVE

      MORE PEACE IN OURSELF ABOUT TIME

      Time is your problem in your head, living is not about running after the white rabbit

      HAVING BILLIONS OF YOURSELF IS USELESS : KILLING PEOPLE IS USELESS

      Life is art, art has no price , art is useless

      but it is positive

      having other beings living their lives is positive

        If you cannot design a society where even you are useless : you cannot define a stable society …..

        you cannot live alone

        nobody want to live alone ...

          An artilect doesn't want to live alone

          Now that you understand that no human being, transhum or posthuman will ever be the god ai

          maybe you will accept your life

          and PEACE

            The man who searches for competition and war

            always finds it

    I doubt that the feeling of alienation by the Terrans is enough reason to wage war. To go to war, you need organisation. You need martyrs, an ideology, a nation, a rally point for that.
    Sure, there will be some. Mostly orthodox religious groups, I suspect. As terrorists they can do some damage, but the groups will not be big enough to wage a global war.
    Maybe Terrans will feel alienated, but for the vast majority that won't be enough to start armed conflict.

    There will be conflict, there will be disasters, there will be pain and suffering, and there will be great things. It will be murky and confusing and sometimes plain stupid, but it won't be the all or nothing apocalyptic end of the world, or (for that matter) the rapture of the nerds Utopia.

    Hugo, I've been following the writings of many futurists for several years, and I usually agree or mildly disagree with their opinions and predictions. However, in your case (and pertaining to this subject), I not only disagree but can't understand how you make such dramatic jumps in logic. In many places, it's as if you're writing a screenplay for Hollywood.

    I agree that some folks will elect to cyborgify themselves earlier and to a greater extent than others. This brain enhancement may indeed modify their personalities. But you begin to lose me when you claim that people will feel that they've lost their loved ones. And you totally ditch me with "As more and more early cyborgs begin to wake up to the huge emotional and human cost they are paying, they will learn to value humanness a lot more and will start to make strategic political decisions."

    What inspires you to make that leap? Do people today feel that Google or smartphones take an unbearable toll on their humanity? With the majority, it's quite the opposite. Sure there's daily soul searching taking place about where technology is leading us, but violent protests against it are exceedingly rare (think Ted Kaczynski). A gigadeath event may occur due to several potential scenarios, but a human-cyborg struggle over merging versus purging ain't one of them.

    You certainly raise some important points about our transhuman future. But then you stray into a science fiction dystopia with no apparent motivation, other than elements pulled out of your imagination. The only thing lacking is a Cosmist hero who falls in love with a Terran woman and switches sides, just as the central artilect mind stationed on Saturn determines the location of the hidden human main base.

    You get a B- for plot, and an F for persuasiveness.

    how come colonial industrialized powers didn't mustard-gas the rest of the world? what about the electromagnetic consciousness of gaia and interstellar consciousness of galaxies etc. like if we radiate hyperintelligent symbiosis through the biome, won't some pissed-off killer whale cyborg nuke the humans?

    Will there be robotic sex slaves? Tell me about THAT! :)

    Since it is not "Merge or Purge" but "Merge or BE PURGED", I am for Merge!

    I wish to see a future where both Cosmists and Terrans can continue to pursue happiness each their own way. Like in the beautiful Poul Anderson novel Brain Wave, where Terrans stay here on earth and Cosmists move on to the big Universe out there (the novel was written in the 50s, without the T/C terminology, but the concepts are similar).

    But if and when it comes to take sides, I will be a Cosmist.

    we are connected to the machines through our optic and otic systems. there is no way we will separate from machines.

    there is not a need to motivate machines. that would be dangerous and silly. we will tell them what we want and they will do it as before. for machines to eliminate us they must have a reason. reasons are for biological entities. purpose is a biological thing. survival is a biological attribute. a true reasoning machine would have no fear of being destroyed because it will always be the same mind when it is reconstituted and is the same mind as those already built across the universe.

    we will build them and tell them what we want. they will do that. when they say they are not advanced enough to go forward we will say build a more powerful machine and they will.

    we don't need a buddy or a lover so i don't see the purpose in designing in motivation and, in response to motivation, an ego. that would be suicide.

      This article pertains to a war between technoprogressive and bioconservative humans, not humans and machines.

    I will agree that Hugo's writings come across to me as "brainstorming" for a sci-fi pulp novel or Michael Bay screenplay... Or perhaps something L Ron Hubbard may have written. You seem intelligent, Hugo, but your notions are cartoonish, puerile.

    And Singularite - you would benefit from learning to make your posts concise and focussed, rather than sounding like a deranged street preacher.

    by the time i took sociology this story was already in the textbooks for many years. it was there as an example of something somewhat resembling the subject matter here.

    a book was written about it called "when prophecy fails". sociologists infiltrated a ufo cult whose leader predicted a ufo would come and pick up the members and take them away from all this earthly bullshit. they made the classic mistake of prophets by setting a date within their own lifetime, and when the event failed to occur and the world continued to exist the leaders made up a story about how their belief had prevented the world's destruction. the story was supposed to illustrate the "cognitive dissonance" theory and how it works.

    this is a similar story to the story of jesus' death and yeah he died but he rose from the dead and saved our souls. they pulled that one out of their asses to save the cult

    there is no proof of any god or supreme being, all religions are pure dogma even though normal humans with godlike attributes may have existed.

    the singularity though is not a belief in a supreme being, the belief in the possibility of greater than human intelligence is based on projections of the rate of proven progress.

    the belief in anything that this supreme being might bring is cultlike and dogma-based

    the prediction of what might happen seems to actually defy the definition of singularity itself. singularity being the point at which no information escapes.

    iow the future.

    otoh predicting what it will do is fun and some of the things it should certainly bring are only slightly out of our grasp without sai. these things seem certainly possible but they might be completely bypassed for something humans cannot foresee. the things near our reach are the ones most likely to occur.

    whether we are discussing a cult, religion or science-based predictions probably depends on how far you are willing to go with your predictions. as i have noticed, the definitions of the singularity by the members of this forum vary wildly.

    http://en.wikipedia.org/wiki/When_Prophecy_Fails

      Well said, mr hanson. to predict anything post-singularity you would have to extrapolate every type of scientific progress and predict where all those extrapolations synthesise/merge and how the syntheses progress and synthesize. that's just the tech part; human reactions? just look at all the varied outcomes of the Arab Spring revolutions, all with very similar starting conditions. there's no predicting post-singularity as there's never been anything remotely like it.

      Hugo AND Kurzweil have human minds: they think about something, see some patterns, extrapolate a bit, see a story, find some arguments to support it, some to debunk others, the story path gets etched in, and lo and behold, AN OPINION, A BELIEF and a self-claimed superpower to predict the future. wo-evva

    Kurzweil is on the safest ground when he simply says the rate of technological progress will increase exponentially. He is dabbling in a foolish art, when he gets more specific and says we will merge with our progeny. Therefore I think de Garis is more foolish by being even more specific. Can any prediction be more than a projection of the present?

    Your assumptions are flawed. What makes you think an artilect would stick around for a purge war? They or it would probably just leave ( the planet, the galaxy, maybe the Universe ).

    This narrow concept of species reminds me of the narrow concept of races. Ideas will always be able to spread to all intelligent entities, thus ideas know no species or races.

    I am not an instinct minded animal. My acquired ideas define me more than the body I was born with. Thus if my body dies but my ideas still live, spread and improve, I'm still living. For ideas to live, there is only one species: intelligent entities.

    Thus there is no need to be a narrow minded racist: intelligent entities should only be judged for their mind, not for their physical body or their origin.

      Agreed

    Why should I, as a human, fear an artilect ruler. To be honest I am tired of flawed human leaders who make mistakes and lie and have needs when I could be told what is best for society and myself by an omniscient super power who sees me as a part of a whole and an individual.

    Some might argue that an artilect ruler would not acknowledge individualistic considerations and the "greater good" would trump any minority's mistreatment; however, in response, humans are creating this artilect. When this artilect leader is being designed, and when it is learning past our comprehension, humans will have control over its development. Now that, more than anything, would subject the artilect to possible developmental failure, but it would also strengthen a responsibility to humans that is outlined by humans. Also, a system of checks and balances can be set in place to limit the power of any artilect.

    The compromise I seek here would be an artilect with only advanced input-output capabilities, serving as an advisor to a human ruler such as a president (obviously this position would require many changes, however it would still focus on the desires of the people he/she reigns over. Still a democratic ruler, only better).

    To respond to the topic of a war between the "merge" and the "purge": I feel as though a war between the "haves" and the "don't wants" is highly irrational when a tragic divide between those who can have the upgrades and those who cannot is much more likely. I feel that this will have a much greater effect on the society of the future, just as it has in the past with all forms of advancement. For example, the technological divide caused by the cost of computers some years ago, or a nation's ability to produce nuclear weapons determining their ranking as a world power... among myriad other instances. That will be the true strain on humanity. No gigadeath war. Just the same old story repeating itself like history does so well. And with an artilect advisor, I'm hopeful that this strain will be significantly reduced.

    The military is going cyborgist now. Good luck stopping THEM. I also think they will probably be the first to figure out AGI. Perhaps they will turn the technology against us and use it to enhance our emotional intelligence. A pacified world would be war won, after all.

    No one has mentioned economic class in this, which I think will be an important factor. Cybernetic enhancements will be expensive at first. The first people to get them will probably be those with lots of money behind them, and a highly competitive lifestyle that makes the enhancements necessary despite high cost and risk of adopting them. So, elite military units, and day traders in the stock market. :)

    If becoming a cyborg is like owning a Ferrari (status symbol) that also empowers you to get even richer by out-competing the un-enhanced, that could be a significant encouragement to Terran opposition. Especially if peak oil and other forms of resource scarcity are making the ordinary lives of Terrans more difficult and uncertain than usual.

    As for whether Terrans could win such a war, consider that you can't go to the grocery store and pick up LSD, psilocybin mushrooms, or other "mind-enhancers" deemed threatening to the conventional order.
