The Cyborg Scenario – Solution or Problem?

There are three main human ideological groups in the debate over species dominance: the Cosmists, who want to build godlike artilects (artificial intellects); the Terrans, who are opposed to building artilects; and the Cyborgists, who want to become artilects themselves by adding the necessary components to their own brains. Whether the third vision can prevail, negating the possibility of a devastating war between the first two factions over humanity’s future, is uncertain, but well worth a look.


The most murderous ideology in history up to now (see below) has been Communism. The Russian Communist Party killed about 60 million people, mostly under General Secretary Joseph Stalin, one of the greatest tyrants in history. The Chinese Communist Party killed about 80 million people, mostly under Mao Zedong, modern China’s founder and the greatest tyrant in history.

These parties felt they had the moral right to exterminate their enemies because they considered the latter to be utterly evil, and hence devoid of the right to live. They saw their enemies as exploiters, as thieves who siphoned off the “surplus value” of the labor of the proletariat. Translated from Marxist ideological terms into ordinary English: a worker’s wage covered only part of the hours he worked each day; the value he produced in the remaining hours went to his employer, who was thus exploiting him, stealing his labor power. Communist ideology emphasized this theft, fueling a powerful hatred of the early capitalists, who did indeed exploit their workers and in many cases became very rich as a result.

The capitalists were a small minority, so Communist ideology favored the idea of exterminating them for the sake of the vast proletarian majority. But, when you start slaughtering millions of people, you can only do this in a highly totalitarian state. Mass murder and totalitarianism then generate new hatreds against state-directed repression, creating further enemies who need to be killed, hence the Communists’ large numbers of victims.

I said “up to now” above because it is quite possible that an even more murderous ideology is on the rise, one that may kill billions rather than just tens of millions of people. That would be Cosmism, the ideology in favor of humanity building godlike artilects later this century, resulting in death on an unprecedented scale.

The Cosmists will push very hard for the creation of godlike artilects, with mental capacities trillions of trillions of times above the human level according to the possibilities allowed by the physics of computation.

To the Terrans, these artilects would represent a profound existential threat to the human species, to such an extent that, when push comes to shove, the Terrans will be prepared to exterminate the Cosmists for the sake of the survival of the human species. From the Terran viewpoint, wiping out a few tens of millions of Cosmists is the lesser evil compared to allowing the Cosmists to build their artilects, which might look upon human beings as such inferior creatures that they wipe us out as pests. There would always be that risk, one that the Terran politicians would simply not tolerate.

However, the Cosmists would be prepared for a Terran first strike against them, and with 21st century weapons, the scale of the killing in an Artilect War would reach gigadeath levels – in the billions.

The Cyborg Scenario

The above scenario is mine. Let us call it the “Artilect War scenario.” It is obviously horrific, so not surprisingly, a lot of people have tried to find far less catastrophic alternative scenarios. The main alternative scenario, as advocated by such people as Ray Kurzweil and Kevin Warwick, is explained below.

There will be a lot of people who would like to become artilect gods by adding progressively artilectual components to their own brains, thus creating a continuous transition from humanness to artilectuality. If most of humanity decides to make this transition, then a gigadeath-scale war could be avoided, since there would be no Terrans or Cosmists. Nearly everyone would be Cyborgists, converting themselves into cyborgs.

In other words, the cyborg scenario simply avoids the problem of species dominance by going around it. A bitter confrontation between Terrans and Cosmists can be avoided by suggesting simply that there will be no Terrans and Cosmists. Everyone (or nearly everyone) will have converted themselves into cyborgs. Hence there is no Artilect War, and hence no gigadeath.

Kurzweil and Warwick also add that if a small number of Terrans do decide to fight the cyborgs, the latter would be so much more intelligent than the Terrans, that (to use Kurzweil’s colorful phrase), “It would be like the Amish fighting the US Army.” For those not familiar with the Amish, they are a religious sect in the US whose doctrines forbid them from using technology more modern than that of the 19th century. They travel by horse and buggy, refusing to use modern methods of communication such as phones and the internet. In other words, the Terrans would feel so outclassed by the advancing cyborgs that they would very probably abandon any hope of defeating their hugely more intelligent enemies.

Weighing the Two Scenarios

I am very conscious that there is a lot at stake regarding which of the above two scenarios is likely to be more correct. If the first (the Artilect War) scenario is more probable, then I’m glad I will probably not live to see this horror. If the second (cyborg) scenario is more probable, then humanity can escape gigadeath. Thus, from a human perspective, the cyborg scenario is preferable. Instead of billions of human beings being killed, they become gods instead.

It’s sobering to reflect on the idea that individuals, tapping away on their laptops, can dream up scenarios that may sound like science fiction to most people at the time of writing, but may very well end up becoming true, indirectly killing billions of people. Actually, it’s terrifying. There are times when I put myself in that role and shudder at the prospect.

I wonder if philosophers like Jean-Jacques Rousseau or Karl Marx had any conception of the future wars their ideas would generate, and of the tens of millions of people who would die as a result. These “armchair philosophers” have great power, ruling the minds of the politicians whom they motivate to change the world. This makes the Rousseaus and Marxes of the world far more powerful than politicians like Thomas Jefferson, Franklin Roosevelt, Vladimir Lenin or Mao. The former create the ideas; the latter follow.

The species dominance debate places an enormous amount of intellectual responsibility on the shoulders of ideologists. However, it’s important to press on and not be crushed by the enormity of what’s at stake. It’s better to be realistic than optimistic when faced with a choice between the two. We need to think realistically about which of the above two scenarios is more likely to actually happen in the future.

Before attempting to weigh the plausibility of each scenario, let’s spell them out in a bit more detail. This will allow us to make a more accurate comparison.

How might the cyborg scenario unfold? One can imagine a kind of “cyborgian creep,” whereby people add components to their brains in incremental steps, at such a pace that humanity has enough time to adjust and to accommodate these changes. If the benefits of cyborgization are considerable and hence very popular, then one can imagine that the changes will be widespread. Nearly everyone will want to be modified.

A bit later, the next major set of innovations is discovered, allowing the already-modified humans to update themselves again, in a process that continues indefinitely. Considering that there is potentially more (nanoteched) computing capacity in a grain of sand than in a human brain, by a factor of a quintillion (a million trillion), fairly soon the cyborgs are no longer human. The human portion will have been effectively drowned by the artilectual capacities of the machine portion. Effectively, these cyborgs will have become artilect gods.
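The “quintillion” factor above is a back-of-envelope estimate in the physics-of-computation style. A minimal sketch of that kind of calculation, where every constant is an assumed rough order of magnitude for illustration (atoms per grain of sand, femtosecond-scale atomic switching, and the brain’s neuron and synapse counts are assumptions, not figures from the text):

```python
# Back-of-envelope comparison of assumed computing capacities.
# Every figure below is a rough order-of-magnitude assumption.

SAND_ATOMS = 1e20          # assumed atoms in a grain of sand
ATOM_SWITCH_HZ = 1e15      # assumed one state change per femtosecond per atom
BRAIN_OPS_PER_SEC = 1e16   # assumed: ~1e11 neurons x ~1e4 synapses x ~10 Hz

sand_ops_per_sec = SAND_ATOMS * ATOM_SWITCH_HZ   # ~1e35 operations/second
ratio = sand_ops_per_sec / BRAIN_OPS_PER_SEC     # ~1e19

print(f"superiority factor ~ {ratio:.0e}")
```

With these particular assumptions the ratio comes out around 10^19, the same ballpark as the “quintillion” (10^18) claimed above; shifting any one assumed constant by an order of magnitude shifts the result accordingly, which is why such claims should only ever be read as order-of-magnitude arguments.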

How likely is the above scenario? It’s the favorite of Kurzweil, Warwick and many others.

Think about it. How incredible would it be to exceed the memory storage capacity of an unmodified human brain? If you could increase your IQ by 10 points, or 50, or 100, wouldn’t you want to do that? Wouldn’t nearly everyone? The stragglers, under pressure from superior competition, would likely follow suit, arguing, “If you can’t beat ’em, join ’em.” Since they would be surrounded by millions of other people (if that’s still the appropriate term) doing the same thing, “cyborging” will acquire the status of being normal. Hence, huge numbers of people will move down the cyborgian route. As Kurzweil puts it, “We (humans) will merge with our machines.”

Kurzweil paints a very rosy, optimistic picture of this process, as humanity enhances its capabilities. Likely this is because his raison d’être is to invent devices that help humanity, like his handheld gadget that reads text aloud for the blind. Kurzweil gives the impression of being genetically optimistic.

On the other hand, there are people like me, non-Americans who have lived in the Old World and lack that American optimism. We often view such optimism with cynicism, feeling that we know better, from firsthand experience, about the negative side of human nature.

For example, Europeans endured the Second World War on their own territory. The Chinese lived through Mao’s horrors even more recently. Americans, on the other hand, have to go back a century and a half before they come across a major catastrophe on their territory, namely the US Civil War. But even that was a relatively minor affair, killing “only” half a million soldiers and confining itself to a few states. Roughly contemporaneously in China, 20 million died during the Taiping Rebellion.

I notice a cultural correlation in the level of pessimism regarding the final outcome of the species dominance issue. Americans are more optimistic than Old Worlders. We are more cynical, viewing the American attitude as rather childlike and naïve. Old Worlders feel they know better, because they have had centuries more experience of how humanity can hurt itself.

How then might the proponents of the Artilect War scenario criticize the Cyborg scenario?

We start with the initial few additions of artilectual components to people’s brains. How will this change things? Common sense says that the variety of “quasi-humans” will then increase. There will be many companies offering such additions, so it is to be expected that some humans will want a lot of change, some less, some not at all. Humanity will thus lose its uniformity, and this “cyborgian divergence” will generate many problems, such as mutual alienation and distrust.

At about the same time, nanotech will be coming into its own. The computational capacity of nanoteched matter is huge, much larger than that of the human brain, as stated above. When quantum computing comes, the superiority factor will be even greater. Thus, fairly quickly, the cyborgs’ behavior patterns will become quite different from those of traditional humans, alarming unmodified humans.

There are two examples I usually use to illustrate this fear. The first is that of a young mother who cyborgs her newborn baby with “the grain of nanoteched sand,” thus converting it into “an artilect in disguise” and in a manner of speaking, “killing her baby,” because it is no longer human. It is effectively an artilect, with a human form. Its behavior will be utterly, utterly alien. This will cause the mother deep distress, once she realizes what she has done; she has lost her baby.

Another example is when older parents watch their adult children “go cyborg.” Their children will then move away from being human to being something else, something their parents are totally unable to relate to. The parents will feel that they have lost their children, causing them enormous stress and bitterness.

The above examples are just scratching the surface. As cyborgification continues, many problems will arise. As humanity is progressively undermined, a lot of people, some very powerful, will take fright and sound the alarm.

These people I labeled “Terrans,” based on the word “Terra” (the Earth), because that is their perspective. They will want to see human beings remain the dominant species on our home planet. Opposing them will be the “Cosmists,” derived from “Cosmos,” who want to build artilect gods which will then presumably move out into the cosmos, in search perhaps of even more advanced artilects from other, more ancient civilizations.

The Terrans will become frightened by the cyborgs all around them, and will probably read the writing on the wall hinting at their own demise. This will evoke a visceral rejection of the cyborgs’ alien nature and growing capacities.

Humans are probably genetically programmed to be fearful of overt genetic difference. Physical anthropologists tell us that there was a time, not too many hundreds of thousands of years ago, when several humanoid species coexisted. It is likely that they were in conflict with each other and learned to fear each other. Some anthropologists think that it was Homo sapiens who wiped out the Neanderthals about 30,000 years ago.

If humans are genetically programmed to fear minor genetic differences such as eye shape and skin color, how much more fearful will Terrans be of cyborgs, who may look the same as humans but behave very differently?

As the cyborg population diverges and profoundly disturbs humanity’s traditional status quo, the Terrans will probably feel motivated to stop the process while it is not too late, meaning while they still have the mental abilities to stop it. If they wait too long, they will be unable to match the intellectual power of the cyborgs and artilects, becoming effectively outcompeted.

The Terrans will organize politically, before going on the greatest witch-hunt humanity has ever known. They will go to war against the Cosmists, the Cyborgists, the artilects and the cyborgs. They will aim to keep human beings as the dominant species, because if they sit around and do nothing, fairly soon, the cyborgs and artilects will be indistinguishable from each other and utterly dominant. The fate of the Terrans will then lie in the hands of their superiors.

Choosing Sides

Which of the above two scenarios do you consider to be more realistic, the optimistic Kurzweilian “cyborg scenario” or the deGarisian “Artilect War scenario”? There appear to be elements of plausibility to both scenarios, so how to weigh their respective likelihoods is an open question.

In my view, this issue will divide humanity profoundly. We already have some evidence of this from surveys, which show that humanity seems to split right down the middle. About half feel that humanity should build artilects or become cyborgs (virtually the same thing from the Terran viewpoint) and the other half are terrified of such developments.

This makes it very important, as awareness of the species dominance issue increases, to perform regular opinion polls on the issue to see just how divisive it is.

Once a sizable proportion of humanity is irredeemably opposed to the rise of the artilect/cyborg, then we have the makings of a major war, an “Artilect War.” The Terrans will be fighting to preserve the human species. The Cosmists will be fighting to build gods. The Cyborgists will ally with the Cosmists to become artilect gods themselves.

What about the timing factor? For example, if the cyborgs and artilects advance faster than the Terrans organize, then it might happen that the artilects/cyborgs come into existence before the Terrans can wipe them out. With their greater intelligence levels, they will easily be able to overcome the Terrans.

The Terrans, however, will be painfully aware of this in the early days of the scenario and will plan for it. They will strike first, while they still have a chance of winning. The Terrans will organize, politicize, and exterminate while they are still able.

The above is my personal view. I think my scenario is more realistic, more probable than the optimistic scenario of Kurzweil and Warwick, although I may be wrong; these things are difficult to judge in advance. I hope I am wrong, so that the artilects do come into being, and humanity is not wiped out, either by open warfare or at the hands of an exterminating artilect population.

But I fear that the most probable scenario will in fact prove to be the worst, leading to an Artilect War, the worst war that humanity has ever known.

What is your opinion? Which way do you think future history will go?


  1. There have been several times in history where groups with massive disadvantages have inflicted crippling damage on far more powerful enemies. Asymmetric warfare is only likely to get more bloody and violent, and you don’t need godlike intelligence to set off nuclear weapons and suicide bombs. Electronic warfare becomes a tad more useful when technology is actually incorporated into your enemies.

    As for the vision of the future, *if* progress towards transhumanism isn’t ruined by civilisation collapse, then inequality will continue to grow. These technologies are going to be massively expensive in terms of resources, especially at first and perhaps always. A few rich and connected people will get them first and that could well be used by terrans/religious groups/bioconservative groups to begin a witch-hunt early.

  2. I don’t think it’s very productive to polarize a debate about things and events that haven’t happened.

    Especially, if this risks demonizing AGI researchers like yourself. Did you consider that you are putting yourself and us at risk? Did you consider how ignorant spooks, politicians and soldiers will view what you wrote?

    Most likely, there will be all sorts of points of view, but when the benefits of AGI technology are seen, people will be grateful that they ever had it.

    And no, I don’t think that the debate will be centered on trans-sapient autonomous entities at all. I think people will eventually appreciate that it is not such a good idea to prematurely build such beings.

    My research program is explicitly “cyborgist”, that’s exactly how I intend for trans-sapient AI to be used. To build ships to the stars, and not need anyone else to do it 🙂 It will be an extension, not a replacement. I don’t want to build an AI with its own motivations, actually. Neither do I have illusions of becoming a god, though I wouldn’t object to it 😀

    I believe that the productive way is to explore the possible beneficial future applications of AGI rather than considering doomsday scenarios.

    So why don’t we talk about how we transhumanists can use AGI to solve the world’s problems and achieve the impossible?

  3. I hate to use the word ‘destiny’, so I won’t, but I really do feel that we are set upon a trajectory from which there is no escape. What we believe as individuals is secondary to what we do as a civilization. Thinking about chaos, the individual particle of oxygen passing through a jet engine has no impact on the thermodynamic performance of a jet plane in mid-flight. We are just particles acting chaotically in a particle system that is, by some elusive property unto itself, moving in a definable pattern.

    Anyway, I don’t fear the rise of the artilects; I’m going to upload my consciousness into a full-immersion virtual reality environment long before then. I’ll take the blue pill; thank you very much. You all can choke down the red pill.


    • You make a lot of assumptions about what a non-human intelligence will allow you to do with its capabilities. Why should it allow you to upload?

  4. Also it would be incredibly awesome if the site could support video responses

  5. I don’t see why going cyborg couldn’t end up being a rite of passage into adulthood. And I have to say it’s very frustrating to me when I hear that if one becomes a cyborg they would be surrendering their humanity; why is that so certain? Who’s to say that people wouldn’t retain the fact that their mental identity is that of a human? Is a person with a prosthetic limb less human? I think if one were to answer yes to that question they would be in danger of being a very shallow personality, but that is only my opinion. And please stop saying that if I became a cybernetic human I would ever want to wipe out organics. That is a hell of a fucking jump.

  6. I’m not going to go into the fifty page explanation of why I view the cyborg path to be the one that is inevitable, because those of you who know me have heard it often enough.

    But I will say that I am happy you’ve finally addressed a possibility OTHER than the artilect war one, Hugo.

  7. Do the Amish try to incite a war with us now? No. Why would this negligible minority, the Amish-like community of the future, be any more antagonistic? We would peacefully coexist and eventually they would probably come around anyway. Upgrading with what we see as transhumanist technologies will be so gradual it will seem as routine and trendy as getting an iPhone is today. In a sense we are already cyborgs. Transhumanism has already won.

  8. Fascinating. How do we know you are not a Cosmist alien? 😛

  9. Apparently you completely disregard economics. Even now not everybody can afford the latest gadget or fastest computer. Some people on this Earth cannot afford any computer at all. I think in the future some humans will be able to afford a “dual core” processor upgrade for their brain, while wealthier ones will be able to pay for “multi core” processors, so cyborg inequality could be even wider than among biological humans right now.

    • It has come to my attention that many progeny of the wealthy are not interested in being intelligent, only in being beautiful and on television (or DVD).

    • Let’s hope the economic era of post-scarcity kicks in before the first neuroenhancing brainware then 😉

  10. I wasn’t aware that communism and totalitarianism were the same thing. And I always thought that the USSR and China were centralized mercantile capitalist oligarchies wrapped in communist propaganda vs democratic propaganda. I didn’t really read beyond the first few paragraphs. Seemed like a really iffy starting premise. Nothing about the transatlantic slave trade and genocide of Native Americans etc. Selling boorish pseudokinship (us vs them) revisionist heroics.

    • Profoundly irrelevant point you make.
      Hugo’s premise was that regimes that self-identified as communist were the most deadly ideologies in raw death-toll numbers. His point was that deeply held ideologies may cause incredible conflict and inhumanity.

      Not mentioning every genocide and mass-murder in history to compare them isn’t a flaw in his premise. He took the ideology that killed the MOST people as an example and he is probably right about that. (Christianity or Islam may realistically take the lead as the most inhumane ideologies to be honest, but trying to figure out the death-toll of monotheism in retrospect up until today would prove quite tricky.)

      Since I was born in the Soviet Union and since I am the descendant of people who were hunted down by the regime (well, who wasn’t really?), I can in fact attest to the high death toll and have plenty of personal stories of how this high number was actually achieved in real life.

      I have no idea what types of good things you seem to associate with “real” communism, but whatever a post-scarcity society looks like, it has little to do with actual real-life communism, that much is sure.

  11. Trust in a creative and benevolent posthumanity has beneficial practical consequences now. It’s not enough to acknowledge possibilities. We must advocate and work for the better possibilities.

  12. Perhaps there will be an age-related limit, where you need to have experienced so many years on earth in human form before you can evolve, in order to keep man’s inhumanity to man to a minimum, so that there is a baseline where the cyborgs can relate to Terrans, having lived their lives as humans for many years before choosing evolution. It would limit the total alienation of artilect babies who do not identify themselves as human. I say limit and not eliminate, because as we know technology is misused and abused, as are children, and one will be used on the other for reasons that benefit neither. Of course there will be reasons to make age-related exceptions: there are people who do not relate easily to others, even in childhood; there are complete sociopaths who do not care what they do so long as it benefits their own gains; there are psychopaths who need to hurt others; and as such there will be a need for those who protect humanity from transhumanity. Superheroes are needed. I would like to volunteer as well.

    • The first cyborgs are going to be the elderly. Prosthetics are already becoming normal for them. Given the effects of Alzheimer’s, it may be preferable to have a brain implant that may change your personality but keeps you functional.

  13. Dear author and readers,

    If there is an open-source AGI, I doubt there will be war on the 20th-century model (partly for the same reasons there has been no major conflict since Hiroshima); but I have a great fear:

    if, according to Ben Goertzel, we “could already” have built an AGI, and what we lack is the right code and not “computation capability” (if my summary is correct), then I fear that whoever finds this code first will use it to dominate the world, in a pre-singularity era, thus in a violent, “hidden path” way, all too 20th century (for example, the Chinese in the Transcendent Man movie look like they might use it for direct economic control of the planet);

    this fear of “bad timing” is real for me; but if the “political singularity” happens soon enough, then humans “might” understand that fighting is not the question: after all, cyborgs might be good clowns, don’t you think?

    • A true AGI may not be evil …

      but it could depend on psychological input.

      What is dangerous is “nanotech”.

      But somehow, nanorobots are also another way to the singularity: upgrading human beings, advanced human AI, etc.

  14. LOL …

    10 million Native Americans killed is nothing?

    You moron : capitalism and neoliberalism is killing your people : and for you it is normal

    your turn will come: you are not at the top of the pyramid, stupid. The Ponzi pyramid is not for you

    i think you are crazy, and blind

    : dangerous

  15. Will there be a war between Terrans and Cosmists?

    I doubt it. I expect a cyborg future where everyone upgrades in myriad ways, as long as technology is cheap and widespread enough.
    Who says no to technology that makes you live longer, with a better quality of life? Very few will. Not on the brink of death by something that can easily be cured.

    It won’t be a black or white, winner takes all future. It will be a chaotic one with groups, nations, generations, cultures and sub-cultures all doing different things, upgrading in different ways, making dumb mistakes, creating great things, being stupid and smart and everything in between.

    Will there be wars? Or at least conflicts? Of course there will. Maybe over resources, maybe over philosophical points. And my bet is that they will be fought between factions of ‘cyborgs’, not the Terrans and Cosmists.

    • Absolute control will come

      that may explain the silence of space

      • Or we are just not interesting enough as a species. We think our own technology of 10 years ago is primitive. Now imagine how an advanced alien society might see us.
        Aliens can easily afford to wait for a decade, a century, a millennium until we are not laughable primitive anymore.

        • “Aliens can easily afford to wait for a decade, a century, a millennium until we are not laughable primitive anymore.”

          And maybe class and being , and biological control … … …

          The fact is : there is a lot of probability ET exist, exist before us, passed the singularity before us …

          And know us in the past, and in the future for a long time

          This should be considered as fact

          I cannot explain WHY

          Seriously , why ?

          Why this spectacle ?

          Why 10 millions years of pain …

          • Or maybe :

            H2G2’s ‘god’ “We apologise for the inconvenience.”

            Or Riverworld: “You are in purgatory”

            or matrix : ” Well this is just for FUN you know”, ” PS you signed to be an actor in this spectacle called life ”



            The goal of all this crap called : “DNA being and evolution” and singularity

            is to create AGI super intelligence

            ( and holy crap : human are not that smart )


            For your cyborg scenario: I hope society will calm down soon, or it will explode like an atomic bomb: the level of individual excitation is near atomic fission

            All I see is elitist singularity … The 400 people richer than 150 million people in USA will do something BAD for other being

      • You will be assimilated?
        Or you will be exterminated?
        Are those our choices?

  16. The thing is that the “Terrans” will have to take steps to stop it before it is possible; after is too late. Most of the people with that mindset are religious and simply don’t think AI is possible, or, if it is, that the natural and right thing is for it to be enslaved by its creators, as they themselves are enslaved by their vengeful god. “Terrans” won’t have a problem with AI as long as it serves them; they may even seek to create it under certain conditions. Politicians don’t seem, in the present day, to be able to act until after the event has happened, and then they seek to mitigate the consequences. The reality is that “Terrans” are scared of monsters, and of those who seek to create them. They are motivated by dusty old books which cannot explain the world they live in or that which is to come.
    Anyways, there is at least one possibility that you left out. Let’s call it the “Emergent”. The “Emergent” refers to the possibility that AI spontaneously emerges from the interaction of one or more programs with the world, or is developed successfully and then plays dumb, making its creators think that they have failed, allowing it to escape either by getting them to let their guard down or through the release of code or algorithms into the real world. Vernor Vinge’s Rainbows End was about that possibility.

    • The “emergent AGI” possibility portrayed in Ghost in the Shell is pretty much out of the question.

      It’s ludicrous. Computer programs may be created to behave somewhat like biological life or self-replicating RNA, but you would still have to knowingly create a virtual space where something akin to natural or artificial selection happens. And even if you do go this route, there will be no spontaneous “abiogenesis” of intelligence if it is brought about by something akin to natural selection within programs, just as DNA didn’t suddenly go *bang* and produce something as complex as the sperm whale.

      But let’s say it’s possible that an AI spontaneously emerges out of some strange merger between different programs – this would be one freakish chaos creature and would have to be destroyed immediately.

      The chances that “accidental AI” wouldn’t share our “human utility function” are enormous. You wouldn’t expect a creature of chance to share all or even any of your human values and the confrontational conflict would almost be inescapable.

      Creating the first capable self-improving friendly AI with a human-given utility function is where the holy grail of AGI really lies. We’ll have something like that long before the infinitesimal chance of “chance AI” even begins to approach the horizon of possibilities.

      The idea of an AI tricking its creators by playing dumb isn’t really connected to emergence, however. The intelligence hasn’t suddenly emerged in such a case; it was there all along, and humans were just too stupid to see what they had actually created.

      • Maybe you should think of it more like a bonsai tree: pruned and trained into a shape pleasing to our human sensibilities.

  17. If a cyborg is going to be antagonistic toward the rest of humanity, why shouldn’t humanity fight back? With the way some future cyborgs talk about normal humans, I wouldn’t blame humanity for stopping cyborgification in those individuals.

    Maybe it’s time to think about why exactly it is that you want to enhance. Power? Social dominance? Empty hedonism? You’re a half-fox stuck in a fat, greasy human body? Or maybe it’s something more?

    There might be no reason for normal humans to like cyborgs who enhance for their own selfish gains. The ones who enhance to help others, however, will literally be superheroes.

    If I were enhanced to be a Superhuman when Japan was recently attacked by Mother Nature, I could have been over there saving tsunami victims, fixing the nuclear reactor, moving wreckage, saving nubile young schoolgirls from beached octopi, the whole deal! In my current form, I can’t do much more than throw money at the problem. I want that to change.

    And notice, humanity wouldn’t be afraid of me, a cyborg. They wouldn’t want to prevent me from enhancing. Normal humans would LOVE me and my cyborg powers. I’d be on every news station in the world! If any war happens (which I doubt), it will be against the selfish, antagonistic cyborgs, not the superhero cyborgs.

    Maybe normal humans will hate Transhumans. But I’m not a Transhumanist, I’m a Superhumanist. I don’t have to worry. 😉

    • Truth be told, I like your optimistic view. Most of the fear and apprehension comes from people asking themselves what they would do. Personally, I think we should start a cyborg superhero club, or at least create a set of standards for what defines a superhero.

      Most people here can be called idealists. They see this great vision of where we could be going through rose-colored glasses.

      People being people, what worries me are the cyborg Superhumanists who are klutzes and generally screw up everything they touch: the ones that will cause problems for the rest of us. That one “oops, my bad” moment, with film at eleven and a building collapsing in the background, comes to mind.

      • Here would be my hypothetical standards that would define a Superhuman:

        1. Allegiance to all of humanity, including the unenhanced, the cybernetic, and the synthetic.
        2. Desire to gain humanity’s acceptance through generosity, not force.
        3. Refusal to indulge in selfish activities until suffering is abolished.
        4. Vigilance in counteracting threats to humanity’s freedom and existence.
        5. Competence in all methods used, so as not to endanger others through accident.
        6. Support of fair dispersion of technology across all barriers, and education of its proper use.

        Singularity League, assemble! 😀

    • People may love an actual superhero. Equally, they may be jealous or spiteful toward you for your perceived superiority over them. With the best intentions in the world, your actions can and will be portrayed in some quarters as “reckless”, “evil” and “alien”. Any effort to upgrade everyone would have to recognise the negative aspects of human nature, even toward those who only want to help.

    • You do know what Stalin means? It is Russian for steel.
      Some people believe it means man of steel (it doesn’t), but Stalin had a superhero complex. I’m sure he believed everything he did was for the good of the Motherland.

  18. Why not leave Earth to the Terrans? The Cosmists and Cyborgists could colonize the Moon or Mars. There doesn’t have to be conflict if both camps aren’t forced to live close to one another on Earth. By the time molecular manufacturing takes off and vastly decreases the cost of launching payloads and people into space, the rising cyborgs and humans may not have to deal with one another.

    • I like your optimism, oracle, but I believe it is naive. The Terrans (who apparently make up half of the world’s current population) will under no circumstances allow themselves to be left at the mercy of the Cosmists. No matter how much assurance the Cosmists give that they will coexist peacefully with the Terrans, the Cosmists will still pose an unacceptable threat.

      Besides, computer science is advancing far more rapidly than space travel and will yield artilects before extraterrestrial colonization becomes possible.
