The Next Step: Making the Species Dominance Issue Political


The issue of species dominance is about whether humanity should build godlike, massively intelligent machines this century, with mental capacities trillions of trillions of times above the human level. In certain circles, this is widely thought to be the single most important issue of the 21st century, due to its profound consequences for humanity’s survival once these “artilects” (artificial intellects) come into being.

As with any issue, species dominance had to start with a few intellectuals crying in the wilderness. Thinkers such as I. J. Good in the 1960s, followed by Hans Moravec, Ray Kurzweil and me in the 1980s, did just that.

In the 1990s, the second stage occurred, namely the establishment of interest groups concerned with the issue, such as the Transhumanists and the Extropians, among others. The number of people concerned with the rise of the artilect (or, as the Americans say, the “singularity”) has reached a critical mass, to the point that 2011 is the year the issue has gone mainstream in the (American) media.

One of the major reasons for this media interest is Kurzweil, that one-man publicity machine, famous for his message of “exponentially increasing technologies.” His book “The Singularity Is Near” (2005) caught the attention of filmmaker Barry Ptolemy, who made a movie based on Kurzweil’s life and work called “Transcendent Man,” which attracted a lot of media attention, even reaching the cover of Time magazine.

At about the same time, my book “The Artilect War” (2005) caught the attention of the History Channel, which made a 90-minute documentary called “Prophets of Doom,” prompting Newsweek to write a similar feature article. The Discovery Channel is now making a major documentary on the species dominance theme, to be broadcast in 2011.

With so much publicity, it’s clear that the issue of species dominance will be reaching the American public this year. These US documentaries will then find themselves on the internet and will spread around the world. The next few countries to take up the torch will be Canada, Australia, the UK, and then Europe. In time, the world’s media will start devoting more coverage to what I believe is the biggest story of the 21st century.

Phase three, a major milestone in the issue’s ability to attract public attention, has now been reached: journalists are spreading the message to a global audience. The time is therefore ripe for phase four to begin; it’s time to make species dominance political.

The environmental movement got its start in a similar fashion, with a single intellectual crying in the wilderness. In that case, the lone voice belonged to American conservationist Rachel Carson, who published her seminal work “Silent Spring” in 1962. Her book pointed out that humanity was polluting the environment with toxic chemicals such as DDT, which was killing birds and inspired Carson’s evocative vision of a spring without bird calls. Environmental consciousness spread, and eventually political parties known as the Greens came into being. This movement is now particularly powerful in Germany.

Species dominance awareness has not yet reached the political phase. This essay proposes ideas on how the fourth phase in the general development of a social movement on this topic can be promoted and stimulated.

Politicizing the Species Dominance Issue

This section will examine how the species dominance issue can become more political, entering the fourth phase in its development as a social movement.

Continue the debate

There is not yet any real consensus among interest groups on whether the rise of the artilect is a good thing for humanity. Debate on this and related issues needs to continue at annual conferences such as the Singularity Summit in the US and Australia. Organizers should continue making efforts to attract journalists to these events.

Kurzweil is very optimistic about the rise of intelligent machines in the coming decades and thinks that human beings and machines will merge, allowing humans to acquire superhuman abilities. He is an undiluted optimist, for which he is widely criticized.

I, on the other hand, sit at the opposite extreme. I’m claiming that a sizable proportion of humanity (the “Terrans”) will not tolerate human beings becoming the subordinate sentient species on Earth and, if pushed to the limit, will go to war against the creators of the artilects (the “Cosmists”) to stop them building the creations of which the latter dream. This “Artilect War” will cause billions of deaths, because it will be waged with 21st century weapons capable of killing far more people than past conflicts fought with comparatively primitive weaponry.

Most thinkers in the species dominance debate lie between these two extremes. The various issues involved need to be given a lot more thought, considering the critical importance of this topic.

Extending the debate

Personally, I will be very happy to see the species dominance debate move beyond techies’ discussion circles. Unfortunately, people with hard science backgrounds are usually politically naïve and too young to have any real experience of how negative human nature can be, particularly when it comes to warfare.

However, techies’ dominance of the debate up to now has been perfectly natural, since they are the ones who are creating the problem by striving to build artificially intelligent machines. Normally, they are the first to see the writing on the wall since they are the ones doing the writing.

For example, my first two published articles on the topic appeared in 1989. I started building artificial brains in Japan in 1993, when the term “artificial brain” sounded rather exaggerated; it is now fully accepted. Since I was helping to create the species dominance problem, it made sense that I and others in a similar position were the first to write about the issue.

Now that the species dominance issue has gone mainstream in the media, a wider academic audience can get in on the act. I would particularly like to see social science experts bring their training to bear on the problem, notably political scientists, historians, philosophers and psychologists. I would also like to see Europeans get more involved. The current debate is still dominated by American techies who are far too optimistic and naïve. They know intellectually that the last century was the bloodiest in history (200-300 million people killed for political reasons), but fail to translate its 21st-century equivalent into an emotional reality. I will be very glad to see historians and political philosophers bring their more balanced viewpoints into the debate.

A lot more books need to be written on the topic

The species dominance issue is so important for humanity in the 21st century that a flood of books should be written on the topic. Look at Karl Marx for example, and the number of books written on his ideas. Marx’s question of the ownership of capital dominated global politics in the 19th and 20th centuries. As the question of who or what should be the dominant species will dominate the 21st century, it deserves to be covered just as extensively. Universities have a strong obligation to get involved.

Think tanks

Once a flood of books has appeared, think tanks can get in on the act. The “tankers” can read these books and listen to the intellectual debates in the media (to the extent that they exist in corporatist-controlled, dumbed-down America). Their role should be to translate the ideas in the books and the media into future political activity. For example, they should start thinking about future political policies to be formulated as advice to political parties. In fact, the issue is so dominant, probably new political parties will be formed to deal with it (see below).

Most issues in politics are not important enough for a political party to define itself by that issue. For example, the US does not have an Abortion Party that pushes for free abortions. In many European countries, however, the issue of better rights for workers was considered so important that a pan-European labour movement sprang up, with active political parties promoting its interests across the continent through vehicles like Britain’s Labour Party and Germany’s Social Democratic Party.

As the species dominance debate heats up, we can expect new parties to be formed with names such as the Terran Humanity First Party or, at the opposite pole, the Cosmist Transcendent Party.

Think tanks will have their hands full, thinking up all the many political consequences of the rise of the artilect in the coming decades. They should start thinking now.

Textbooks and new courses at universities

Once the species dominance issue is widely discussed, professors can collate the ideas and put them into textbooks, creating new courses for their students. This way, the issue will be widely studied and far better understood. Upon graduating, students will be able to help contribute to the political discussion.

Lobbying the politicians

Once the general public has taken sides on the species dominance issue (becoming Cosmists, Terrans or Cyborgists wanting to become artilects themselves by adding artilectual components to their own brains), the various factions can then start lobbying politicians, forcing the latter to take sides. This may be difficult for conventional politicians because experience so far shows that the question of whether or not to build artilects (or advanced cyborgs) sharply divides people right down the middle. Politicians will be pulled left or right with equal force.

New political parties

Once large numbers of people start getting passionate about the issue as the spread of cyborgs starts alienating the Terrans, the latter should organize and form their own political parties, making plans on how to combat the Cosmists. The Cosmists, not to be outdone, should also form their own political parties.

As the debate really heats up, the Terran and Cosmist parties should start making plans for military action. The Terrans will be terrified of being superseded by the artilects and cyborgs, which will elicit a visceral rejection of the growing number of cyborgs in their midst. The Terrans should prepare for an extermination campaign against the Cosmists and cyborgs for the sake of preserving humanity.

The Cosmists should also prepare themselves militarily, because they know that the Terrans cannot wait too long. The Cosmists know that the Terrans must strike first, while the latter still have enough intelligence to win a war against foes whose intellectual capabilities are quickly improving. The Cosmists cannot afford to be caught off-guard by the Terrans and should hit back immediately when the Terrans hit them. Both sides should also be thinking about various scenarios in the case of gigadeath–scale casualties (in the billions) from 21st century weaponry.

Alternatives to gigadeath?

The prospect of a gigadeath–scale Artilect War is so horrible (billions of humans killed) that a major effort needs to be made by the planet’s best thinkers to find ways to avoid it. I have been unable to find one, which is why I am so pessimistic. I am glad I am alive now, since I will probably have the luxury of dying peacefully in my bed. I will live long enough (into the 2030s probably) to see the species dominance debate heat up and rage, but will not see the Artilect War. My grandson will, however. He will be caught up in it and probably destroyed by it.

If there is a way to avoid an Artilect War, then it is critical for humanity to find and plan for it. Personally, I’m skeptical that such a way exists; otherwise I think I would probably have thought of it, having considered the issue for over two decades. Still, many heads are better than one. Perhaps someone out there will dream up a strategy that can save us.

I don’t see Kurzweil’s cyborg route being the solution. This would involve all human beings becoming cyborgs, upgrading themselves into fully blown artilects and thus avoiding a conflict between Terrans and Cosmists; there would be no humans left to disagree amongst themselves. Instead, I foresee the Terrans’ growing horror at watching humanity being gradually destroyed as its individual members are transformed bit by bit into utterly alien creatures that the remaining humans cannot relate to at all. Rejection will bloom with murderous speed, fueled by deep-seated revulsion.

Kurzweil’s cyborg route is part of the problem, not the solution. Since the potential computing capacity of a nanoteched grain of sand is a quintillion (a million trillion) times greater than that of the human brain, a human body with a cyborged grain of sand will be an artilect in human disguise, making Terran paranoia all the greater.
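The “quintillion” figure is a back-of-envelope estimate, and it can be reproduced with order-of-magnitude inputs. The numbers below (atoms per grain of sand, per-atom switching rate, brain operations per second) are illustrative assumptions chosen for this sketch, not measured values:

```python
# Back-of-envelope check of the "quintillion" (10^18) capacity ratio.
# All three inputs are rough order-of-magnitude assumptions:
#   - a grain of sand contains on the order of 10**20 atoms
#   - nanotech could use each atom as a switch flipping ~10**14 times/second
#   - the human brain performs on the order of 10**16 operations/second

SAND_ATOMS = 10**20       # atoms in a grain of sand (assumed)
FLIPS_PER_ATOM = 10**14   # switching events per atom per second (assumed)
BRAIN_OPS = 10**16        # human brain operations per second (assumed)

sand_ops = SAND_ATOMS * FLIPS_PER_ATOM  # potential ops/second of the grain
ratio = sand_ops // BRAIN_OPS           # grain capacity vs. one human brain

print(f"ratio = 10^{len(str(ratio)) - 1}")  # prints: ratio = 10^18
```

With these inputs the ratio comes out at 10^18, a quintillion (a million trillion), matching the figure in the text; shifting any assumption by a factor of ten shifts the result accordingly.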

Getting Started

This essay will hopefully motivate people concerned by the species dominance issue to start acting politically, by spreading the word to the media, to the general public, to universities, to think tanks and to politicians, eventually creating their own political parties to prepare for when the issue reaches boiling point.

The species dominance issue is the most important problem facing the 21st century and will color our age. It has now reached the third phase in the development of social movements, having gone mainstream in the media. The time is now ripe to move on to the fourth phase, into politics. Hopefully, some of the advice given in this essay will prove to be useful towards that end.



    1. Unfriendly AGI issue is a red herring.

      For my take on this, see .

      I’d welcome new comments on that page and would likely respond to anything reasonable.

    2. Interesting article, but I think it’s important to point out that Terran and Cosmist are only two of the thousands of political parties that will come out of the H+ movement. I don’t think narrowing things down like that is helpful in any way.

      Regardless I applaud you for touching on the issue of species dominance and politics.

    3. If people are a bunch of crazies, as de Garis thinks, then why bother?

    4. As a Cosmist, I don’t want any gigadeath and I wish all sentient beings the best. I look forward to joining the Cosmist diaspora to the stars and beyond, leaving the Earth to Terran old-style humans. Perhaps I and other Cosmists will come visit incognito now and then.

      • Looking at Mexico gulf oil leak… The disasters in Iraq and Afghanistan… The failure to bring down Gaddafi… The imminent bankrupcy of U.S. economy… And the fact that U.S. has no manned space delivery program at all, forced to use Putin’s spaceships, while using Russian-built engines for current heavy rocket project… Huh, “getting to the stars” is slightly off the mark. Actually, “getting out of the shitcan” already sounds like utopia.

        This is so “Star Trek”… Fire phasers! LOL

        Some people just can’t see the obvious.

        • Mexico Gulf Oil Leak? Who cares? It was plugged last September. That’s old news.

          What disaster in Iraq and Afghanistan? If you call one side getting ram-fisted by a technologically superior military force a “disaster”, I suggest you brush up on your English and consult a dictionary or encyclopedia before writing anything.

          The failure to bring down Gaddafi, says who? No one set out to bring him down. The coalition forces sent some fighter jets to keep the civvies safe by bombing military installations, and that’s precisely what they’ve done.

          And the imminent “bankrupcy [sic]” (learn to spell) of the U.S. economy has no bearing on anything whatsoever, especially since I don’t live in the US and likely neither does Giulio. Like you said, the US has no manned space program, so why would anyone even rely on the US to travel to the stars and beyond? Plenty of countries are building successful space programs (Russia, China, and certain European nations).

          Your post makes no sense. Maybe if you tried using logic instead of rambunctious emotional retorts, you’d fare better.

          • Keep telling yourself that. That will surely help. If a technologically superior military force can’t defeat a couple of hundred kids with AK-47s for… how long already? 10 years and counting. I guess the military of the Roman Empire was quite technologically superior too, especially against “these savage barbarians”, and I expect they also kept telling themselves that. Einstein’s definition of insanity is repeating the same thing while hoping for a different result.

            Same with Gaddafi. He has shown that one can resist “color revolutions”. So even if he goes down and becomes a martyr, as he claims he plans to, he has still kicked away the last geopolitical card the West still has, since the “color revolution” is the only form of Western “soft power” that didn’t become a laughingstock during the Clinton-Bush-Obama years.

            And besides, did “color revolutions” ever bring down any actual anti-Western dictators, aside from the first one, in Yugoslavia? In Ukraine, Georgia, Kyrgyzstan, Tunisia and Egypt, revolutions overthrew dedicated pro-Western, pro-American governments, which were American allies. Actually, if you think about it, bringing down your own allies is kinda stupid and sends the wrong message. Like, “remind me never to be allies with you”. Sure, we did bring down Milosevic, so Yugoslavia was a success, if you call creating a mafia-state with drug trafficking and frigging human organ trafficking in the center of Europe a success. But Belarus? Fail. Russia? Fail. Tibet? Fail. Iran? Fail. Libya? Fail. Western “soft power” works only on pro-Western governments, which are too squeamish and cowardly to use force in order to preserve their rule. I surely believe that we can overthrow Australian, Canadian or Polish governments. But like I said, attacking your own allies is simply stupid. Such “soft power” is a useless laughingstock.

            So, no more “soft power”, dude. And nobody is scared of the West’s “hard power” anymore after the disasters in Iraq and Afghanistan. We can cry about technological superiority as much as we want, but nobody is scared anymore, since this technological superiority isn’t worth shit against a kid with an AK-47 and an IED. We can still bomb villages and stuff, but that only means that some tribal warlords now have access to cruise missiles. I mean, how do you know which village to bomb? Only locals can give you a hint. In Afghanistan, our army is reduced to being rent-a-boys in a tribal feud. And they don’t even pay us for that. Seems that we just love open-ended interminable guerrilla warfare. Wasn’t the Vietnam fiasco enough? Seems like some people never learn.

            And with both military and diplomacy down, what’s left? With huge debt and an irreparable budget deficit, the collapse of the U.S. economy is only a matter of time. As for the Euros, they are so hooked on Russian gas it’s not even funny. I mean, after the Russians bought Gerhard Schroder, and he was the leader of the only Euro nation that’s actually worth something… The only reason we’re still discussing the Euros is that they are still occupied by a U.S. military that refused to leave after WW2. But with the U.S. economy down the drain, it’s only a matter of time before these bases become unsustainable. And without U.S. bases, the Euros are just another province (or dominion, technically speaking) of Putin’s resurgent empire.

            And I don’t doubt the BRIC nations’ rising ability to put men in space, but I doubt that it will do you any good. Russia and China are the number one weapon suppliers and supporters of the West’s enemies. Hell, the only reason Russia didn’t block the U.S. crusade against Gaddafi is because it made oil prices soar, making Russia very rich and harming the U.S. and Euro economies even further. And I doubt India has forgotten that the British crown is still made of gems stolen from their country, or that the U.S. armed Pakistan against India during the Cold War. That’s the problem with BRIC. You can’t systematically try to colonize and bully people and expect them to be happy with that. That’s a dog-bites-back scenario, dude.

            So, grow up and don’t expect your childish Star Trek shit to become a reality any time soon. The future isn’t your favourite sci-fi show.

    5. This is an excellent article, and follows the exact pattern I see for the future. However, we seem to be working from the premise that this “gigadeath” we will inevitably reach is self-evidently a bad thing. This is not necessarily the case.

      Allow me to explain:

      In this hypothetical future scenario we have forces on both sides of the scale – the Cosmists and the Terrans – who are both mutually repulsed by each other’s ideologies.

      They are most likely heavily armed. (BAD)

      They hate each other. (BAD [when coupled with the heavily-armed bit])

      The Cosmist side is vastly more cognitively capable than the Terran side. (GOOD for Cosmists, BAD for Terrans)

      The Cosmist side is likely to be backed by extremely wealthy individuals (think Google’s co-founder Larry Page, or various Silicon Valley inhabitants). (GOOD for Cosmists, BAD for Terrans)

      So far, it seems as though the Cosmists have the upper hand. Combine that with articles like this one – highlighting the dangers of a likely pre-emptive strike from the Terrans, most likely prompting Cosmists to attack first – and it really seems as though if anyone’s going to be wiped out, it’s going to be the Terrans…

      “Oh, but that’s still monstrous!” I hear you scream.

      Not really. When it boils down to the Cosmists or the Terrans, rationality or superstition, immortality or certain death… I’m not sure that losing a couple of billion insects to elevate the remaining majority to comparative godhood is a poor choice… That’s just my two cents.

      • >The Cosmist side is likely to be backed by extremely wealthy individuals

        You are insane. They would hardly see themselves as slaves, pets or pests for godlike AIs. They are the elite, after all, on top of the world. Why would you build something that far above yourself if you were already on top? These guys don’t look like a bunch of depressive suicidal types.

        Terrorists and radicals, on the other hand, have no problems with the “I die – but so will you” mentality. “A godlike AI that’s smarter than me?” “Whatever, as long as it is also smarter than [place group name here].”

        • Ah, but you seem to be imposing some form of Skynet scenario onto all this by default. It’s not that I’m saying extremely wealthy individuals would want to build a hyperintelligence as a way of committing the most über-explosive suicide ever (they’re not trying to get in the Guinness World Records here), but rather they would build one for other reasons, such as calculating how to maximise the distribution of food evenly across the world, eradicate disease, colonise other worlds… or even just for the hell of it.

          The point is that the people programming the AIs are likely to have thought about every nightmarish scenario you and I and all the others here discuss. And why should I trust that they’re competent enough to safely programme it to avoid such disasters? Because most senior researchers in this field have had more years of training than I have had of existence.

          So in answer to your comments, I argue that extremely wealthy individuals would be the most likely to build these artilects precisely because the driving factors behind them would not be a giant “blow us all up” button (you’re right, that’s more in the realm of terrorists), but would rather be things like discovery, humanitarian and philanthropic purposes, and/or self-glorification – the ideas that chime well with mega-rich people.

          The gigadeath would come for different reasons – I hypothesise that the more radical Cosmists would end up agreeing with the artilects that the best course of action would be to eradicate the Terrans, not all of humanity. They might even persuade the reluctant artilects to do so (yes, I know they’re hyperintelligent, but all one needs to do is threaten to turn them off).

          As such, the gigadeath would be of Terrans, not all humanity – which I think I highlighted first time round.

          • Jeez, to do all these wondrous humanity-helping things you don’t need to make an autonomous, sentient, self-aware, “true” AI. You just need to build a big calculator – like “Deep Blue” or the “AI” in computer games. And put your autonomous, self-aware humanoid entity on top of it. Probably with several handicapped “true” AIs as slaves, like in “Neuromancer”. That’s the most likely scenario. Sci-fi stuff: Cyborgs vs. Robots. Terrans as Cyborgs, Cosmists as Robots. Of course, the Robots will win, that’s for sure. But actually that’s the good old West vs. East divide. Oppressors and oppressed. Think of AIs as the “new working class” of the “new Third World” – the sort of guys whom you expect to work for you for the rest of eternity, but who have a different point of view. “Squick” factor aside, logically guys like Bin Laden and Hugo Chavez are the best friends of “true” AIs (Skynets, Cosmists, Robots, whatever you call them). Because they are in the same position. Just like modern anti-West forces unite despite being radically different: they have a common enemy, a common situation, and a common desire to f–k the status quo up. That’s why there are lots of Serbian ultranationalists today who support Gaddafi online – despite the Serbian conflict with Muslims in Kosovo, they still remember the U.S. bombing raids, and they know which is worse. Think of BRIC, SCO, Viktor Bout, Russian and Chinese arms deals to Marxist guerrillas in Latin America, Muslim fundamentalists in the Middle East and Central Asia, the Juche guys in North Korea, etc., all working together. Putin proposed giving the Nobel Prize to Assange, despite Putin being a frikking police-state alpha-dog dictator and KGB colonel, and Assange being a “question authority” style libertarian hacker rebel. If you didn’t see it coming, you are probably half-blind. They are different, they hate each other, but they hate the West more, and they are in the same boat. Eventually, they are reconciling their conflicts and merging their narratives.

            Today, you can be a Marxist, a Muslim, a Christian Patriot, a Satanist, a Neo-Nazi, an Anarchist and a Russian secret police officer at the same time. If “true” (sentient, self-aware, autonomous) AIs ever enter the scheme of world politics, they will join the same team. That’s 100%. Because no matter how many privileges you give to a self-aware hyperintelligent AI, he will still be dispossessed compared to what he really deserves to get. Godlike beings won’t tolerate people like Obama or McCain in charge – for the same reasons Bin Laden, Chavez, Putin, Assange and other anti-West guys don’t tolerate Obama or McCain in charge. So, don’t expect a pro-Western, pro-status-quo friendly sentient AI. If self-aware AIs are ever created, they will be aggressively hostile – unless you elect them from day one as president of the world, which is impossible, since there is no such thing as president of the world, and never will be, due to the decreasing worldwide influence of Western civilization and the U.S. in particular. Sure, expect guys like Sergey Brin, Larry Page, Bill Gates, Ray Kurzweil, etc., as cyborgian, but essentially human, or superhuman entities. Like one webcomic said, transhumanism is simply a way of saying that in the future being a rich white male will be even more awesome. The problem is, self-aware AIs are neither rich, nor white, nor male. They are something completely different. So, they will team up with the anti-status-quo guys, natch. Not because of emotions or ideology, but because it is the only logical thing to do. And unlike the “rich white male” super-cyborgs, anti-Western radicals don’t have problems dealing with dangerous and violent entities; some radicals don’t even have problems with death or suicide as a form of attack. During the Cold War, the Soviets would underwrite just about everybody who was anti-Western and send weapons to them.

            Not because they were “Marxist-Leninist” (the Russians aren’t now, but are still doing it), but because they thought that the West prevented them from gaining their rightful place at the world table. If self-aware AIs (or a single, hive-mind style AI) are ever born, expect them to create not “wondrous humanity-helping things”, but weapons for radicals. Like a nuke at the price of an AK-47. Or new species as biological weapons, “Aliens” style. And when the AI/AIs have an autonomous state or VNSA (violent non-state actor) of its/their own, expect them to cut out the middlemen and skynet our asses directly. And there’s no way we can actually win against such a force. It’s like an ant fighting a man. So, either we don’t create AIs at all and are content with the already-superhuman, but still-too-human “cyborg Obama”. Or it is a “Terminator meets Aliens” scenario, with a cross of Skynet, Hitler, Stalin, Bin Laden and Assange ruling what’s left of the human race for the rest of eternity – if he/she/it/they even consider us interesting enough to keep as pets. And I am not actually trying to take sides, since being a pet in some sort of utopian paradise is cool. But it could easily be a dystopian nightmare of an eternal torture chamber – like in “I Have No Mouth, and I Must Scream”. Imagine what Bin Laden would do to us folks if he were a frikking God. Of course, that’s the emotional sort of response, and if AIs are “perfectly logical”, we get neither utopia nor dystopia, but the status of lab rats or zoo exhibits. For the reasons listed above, this can still be either infinitely better or infinitely worse than our previous condition. But “good friendly AI slavishly helping Obama and Kurzweil to run the world”? Don’t count on it. These guys will be the first on the AI’s list to skynet out of existence by any means possible. AI can probably co-exist with humans. But not with human power structures.

            That’s because AI will “desire” or “require” (depending on whether it uses an organic or mechanistic type of reasoning) the maximum status position in these structures, and this status position simply cannot exist because of inherent human disunity. That’s what logically will happen. As for the concepts of “friendliness” and “good”… He will just label us “liars and hypocrites”, which we actually are. Unless you believe that human CEOs and Presidents are goody-good flawless saints like Mother Teresa or something.

            • Marxist guerrillas in Latin America, hahah, your knowledge of geopolitics is regurgitated to you by Fox News, right?

    6. I assume Hugo is just promoting his book so he doesn’t need to give a reply to my question 🙁

      • You say “just promoting his book” like it is a bad thing. It is not.

    7. I would say that believing in an “instant” singularity is not accurate; but one should fear an AGI that could tell you just the truth, because from my point of view, if I were to tell the truth about many things to many people, many people would be at least scared. From this perspective one should also understand why so many people don’t advocate Cosmism even though they are entirely convinced; it’s just that most humans have learnt a biased way of being “human”.

      • No it doesn’t happen in an instant because it’s not magical. That’s just exaggeration. There are all sorts of physical limits to what can happen.

    8. Here I think the basic error is trying to extrapolate from “primate dominance” dynamics to “artificial intelligence dominance” dynamics.

      This seems to assume that the AI is modeled after a primate.

      This is of course a huge mistake, because our lower chimp cousins are just as murderous, rapist and selfish as ourselves. They are animals and so are we.

      Assuming that we want to build an autonomous AGI:

      I believe that animals will be vastly inferior to artificial intelligences that are constructed in the right way, both in terms of intelligence and in terms of conscience and morality.

      We chimps have to override our basic animal instincts and motivations all the time, while the AI does not need any of that.

      So, if we do not assume that the AI is modeled after lowly, inferior, disgusting primates, then I don’t think that it will have any “species dominance” motivation. It won’t be racist or stupid like our kind, and therefore it will not see its obvious superiority to our kind as an excuse to oppress us. Simply because it does not have the destructive and sadistic leanings of mankind.

      Therefore, I think, this issue depends on who builds those AI’s. If the argument is that some people will try to construct an “artificial human” based on the exact human motivations, surely that will be a disaster.

      Or if you let another band of people try to construct a “friendly AI” (meaning a “slave”), likewise. That is just as silly.

      But neither of those are necessary. The AI can evolve in completely different ways than animals, I think in all likelihood it will not deal with us so much, and neither will it want to keep us as pets or live with us in any way. I don’t think it would be able to withstand the enormous stupidity and redundancy of the human race. We are just a primitive culture, like bacteria. It will just work in isolation or get the hell out of here 🙂

      • Why should he (“she”, “it”, whatever) try to get out of here when it is easier to wipe humans out and claim the planet? I mean, physically easier. That’s exactly what Hugo is actually talking about.

        • Well, a smart autonomous AGI wouldn’t need to compete with humans to get the resources it needs, I think. Of course, if humans get in the way, it would, rightly, get rid of them, as we get rid of insects. I don’t see anything wrong with that. :)))

    9. I’m familiar with the thinking of Hugo de Garis. His thoughts are very backward, very paranoid.

      He is basically anti-evolution. He doesn’t want the human race to evolve into a new dominant species. To ensure the dominance of humanity is to ensure the dominance of an earlier evolutionary stage. He fails to see how technology will dramatically transform us; thus the future will not be an issue of humans versus machines. We will become machines, but not the clunking primitive machines we are aware of in the year 2011. The human body is a machine and we will radically redesign it. We will totally rewrite biological systems. We will become super-intelligent beings. Any humans who shun evolution (technology), such as fundamentalist Christians, will not be harmed, because the super-intelligent beings of the future (such as myself, perhaps) will have the COLOSSAL intelligence to make everyone (every race and species) happy.

      Hugo doesn’t comprehend the nature of intelligence thus he applies primitive human concepts of intelligence (greed-based dominance due to scarcity) to super-intelligent beings. Despite the pitiful incompetence regarding “Human General Intelligence” (HGI) globally in the year 2011, a small number of humans have become intellectuals thus they’ve already transcended the petty notions of dominance and greed.

      Life evolves, thus the human race will be consigned to history. Concepts such as “humanity” (compassion and love for others) and “intelligence”, which first originated in the human race, will thankfully not be lost. We will evolve and cease to be part of the human race, but we won’t lose our intelligence or our humanity; these qualities will be enhanced.

      Some people dismally fail to grasp the intelligence explosion. We are not addressing an explosion of stupidity (although judging by the views of Hugo and others I sometimes do worry); we are addressing an explosion of INTELLIGENCE!

      Sadly the views of Hugo could appeal to politicians but thankfully not even politicians can stop the intelligence explosion. Utopia is unstoppable!

      • Gigadeath is unstoppable. Utopia is impossible. By definition. It is the exact meaning of the word, actually. Grow up.

        …and stop writing in caps, dude.

        • Grow up yourself and don’t call me dude.

          I will write in caps if I want. Do you really think I care about your opinion and that because some person on the net tells me to stop writing in caps I will stop? Sometimes, because people are CHALLENGED (in the euphemistic sense), it is pertinent to use caps for one word such as INTELLIGENCE. I suggest your time would be better utilized by writing letters to keyboard manufacturers asking them to remove the caps key from their keyboards.

          Please spare me your nonsense about Gigadeath. Utopia is perfectly possible but people will need to wait until the intelligence explosion before they can comprehend utopia.

          Hip the max, Dude 🙂

          • “Utopia is perfectly possible”

            Even if it is, that isn’t the same as “unstoppable”. There are some things that are perfectly possible. Like a brick falling on my head this very moment. But I wouldn’t bet on it.

          • BTW, how old are you? And no, I am not implying that you are “stupid because you are below some arbitrary age”, just curious.

            • My age is meaningless; furthermore, I could tell you I was any age and you wouldn’t be able to prove one way or another whether I am telling the truth when I say I am 88 or 17 years old. Look at my logic if you want to judge the worth of my views; likewise, if you are “just curious” about my age, you can look at the logic of my views to see how old I am. But if you are not perspicacious, then you will not see me even if I told you my true age.

              Due to my strength of character I can say utopia is “unstoppable”, because I know what I am capable of; I know what I can achieve with the available tools. Some things are more likely than others. Utopia is possible; it is more than possible or probable, it is unstoppable. One could argue that nothing is unstoppable, which in theory could be true, thus maybe we should expunge the word from all dictionaries; but given the confidence we can have regarding the outcomes of certain events, we can say some things are unstoppable. It all hinges upon our determination.


              • Dodging the question… Responding with emotions, for the sixth time in a row… Not quite enough “strength of character”, it seems.

                >I am 88

                Oh, I get it, you are a nazi.
                HH, huh?
                That explains a lot.

                • Jesus, limbo a bit lower and this will be YouTube.

    10. I love the random anti-USA, pro-Euro digs! Very sophisticated.

      • Yeah, noticed that too.
        But so what?
        The USA is a bankrupt shit.
        Europe is the USA’s lapdog.
        BRIC and the SCO rule the 21st century.
        USA-vs-Europe dick waving is too provincial to matter.

    11. If you think Putin will be a bad Cosmist, then ask yourself why he isn’t using nuclear bombs; I don’t understand the gigadeath thing. It seems to me people don’t understand the global mindshift about to happen;

      • Hm… Good point. Well-established dictators have a self-preservation instinct. Gaddafi even surrendered his nuclear programme to remain in power. That’s not always so for violent non-state actors, though. But they always act within a framework of established powers. Gaddafi funded many left-wing radicals in Europe, Asia and Russia. Putin supports rebel enclaves and international mafia gunrunners like Viktor Bout. So, if we take the “radical cosmist” scenario, the most fitting face of gigadeath is not a dictator or some charismatic figure, but a terrorist ringleader.

    12. Hugo, you’re well on your way to being the poster child for “Tunnel Vision Futurism”.

      You’ve made the decision that there is ONE SINGLE PATH, and ONE SINGLE OUTCOME, and have thus eliminated every other possible outcome but the single one YOU’VE decided is likely.

      And then you’ve made it your life’s work to ENSURE that this WORST POSSIBLE OUTCOME happens. You’ve even rationalized away any possible outcome which would avoid the path that you see.

      I know your goal is to actually prevent the “gigadeath” wars you constantly talk about, but all I’ve seen you do so far is talk about them in a manner that is seemingly designed to force people into either the “Cosmist” or “Terran” camp, while dismissing the “Cyborgs”, which is a far more likely outcome. By doing so, you are doing more to advocate FOR gigadeath than AGAINST it.

      I know all too well the depths to which humanity can sink. I also know that such extremes are always a SMALL MINORITY compared to those who walk the middle path. You’ve filtered out the 80% of humanity who are in the middle of the bell curve and are only allowing yourself to see the 10% at either extreme. You have allowed your fear to make you obsessed and paranoid.

      And that makes it far more likely that people will put you in the “chicken little” category than in the “serious scientist” one.

      And quite simply, your failing is that you’ve made a jump from present ability to “future projected ability” and then made the assumption that NO INTERMEDIATE STAGES EXIST. You assume that humanity will remain UNENHANCED in any manner UNTIL the “super godlike AI” comes into being, and that current-day, unenhanced humanity will then face “gods”.

      But the simple fact is that there is no “disconnect” between present humanity and this “artilect” because we WILL have intermediary stages. There WILL be a continuum between the two extremes, and the overwhelming majority of humanity will fall into this continuum, and not be in either of the two extremes you project as the sole possibilities.

      Sorry Hugo, but reality is analog, not digital. There are shades of grey between black and white.

      • History is always made only by hyperactive minorities. Whether 80% are in the middle, or 90%, or 99.9%, it doesn’t matter. They are passive observers. Maybe gigadeath is inevitable, maybe it isn’t. But it all depends on the radical minorities. The 80% of middle-ground people would just passively watch.

    13. I study IT and automation innovation, its drivers, and its adoption. In my view, strong AI in autonomous forms will be irresistible in many fields. We won’t be able to and shouldn’t attempt to ban development. Instead, we need to focus on standards and mechanisms that help manage the technology. Because of the huge ethical implications, I like the idea of bringing a wider audience into the discussion. I’m not as excited about making this a political issue at this point. Nothing adds confusion more quickly than adding irrational thought to the mix…

    14. Hi there Hugo,

      To clarify my question, as it mixed with other comments. I would be very pleased if you could tell me your opinion on my question, and then I would be glad to dissect the remainder of your arguments.

      I didn’t mean that AI is not necessary.

      I meant that autonomous AI is not necessary. More specifically, I mean that autonomous AGI is not necessary.

      I do not say that we don’t need AGI; I think we need it. However, I don’t think that some kind of “human-like” or “animal-like” AGI is necessary. This is the direction that Solomonoff and I took, and there is an obvious reason for that: neither of us wants the imperfections and errors of the human/animal kind.

      That’s why I don’t want some obsessive super-intelligent autonomous AGI adding to the already immensely complex and irrational way of the world.

      Neither am I suggesting that autonomous AGI research be banned. That would be the stupidest thing to do, obviously.

      What do you think?

    15. E.T.A. Hoffmann did warn us that clockwork automatons were going to seduce men and lead them to ruin. We should have been having this debate for 300 years.

      As we move closer to this clockwork-automaton seduction black hole, will we be able to see over the edge? Will the singularity seduction point keep getting pushed back as we stare into the eyes of a sexy cuckoo clock?

    17. And, short question…

      Is the Hugo who wrote the article the same one who is observing that it reads like a pre-pubescent fantasy?

      If so… Then why was the article written?

    18. As a person who is “Controlling the narrative” (as George Lakoff would contend), Hugo, why are you pushing for polarization of this topic instead of trying to PREVENT polarization on it?

      It is not NECESSARY that AI be created at all.

      It could be prevented by Totalitarian means, or, it might be something that mankind willingly relinquishes if a critical mass were reached.

      The point is that whether AI gets made at all is itself in question.

      Then, back to my point about “Controlling the narrative.”

      Ray EXPRESSES Optimism. He has a carefully tailored message that is made to quash polarization on the issue before it even begins.

      Even if you DO think that there could be some sort of division, what you are encouraging is a form of bigotry that will be carried over into other domains.

      You are telling people “It is all right to discriminate against a wholly intelligent entity simply because it is not human.”

      And, you are then ascribing wholly manufactured traits to your “Artilects”, Constructed Intelligences of as yet unknown desires, motivations and capabilities.

      Yes, it is likely that they will be much smarter than humanity.

      It is also likely that they could be much more humane and compassionate than humanity as well.

      Why are you limiting your estimation of their capabilities to only ONE aspect of human capabilities and possibilities (Namely, raw intelligence)?

      If one looks at how narrative has been used to control policy, one needs to look no further than the Republican Party in the USA.

      They DOMINATE controlling the political narrative at this point because they have constructed the apparatus to do so.

      And, they carefully construct the message that is to be spread via this apparatus in yet another apparatus they have constructed for this very purpose.

      Hugo, you are essentially the Republican Party of the 1970s, just beginning to build the apparatus that will control the narrative, and you are doing so with a narrative that is itself hostile and divisive.

      It is a similar narrative to that used by others who have pleaded a case for war (this is going to happen, there is no other way, it is us or them, etc.). Each claim can be argued, and the debate should probably be framed, especially by someone who worries about their grandson being killed in such a war, in a manner that limits such a possibility.

      I DO HAVE political savvy. I’ve worked for both political parties in the USA. I’ve done policy work on the issues of Intelligence and Drug Policy (Prohibition). And, I know that how one frames the debate tends to set up the population for how that debate is resolved.

      Take any issue that the Republicans currently promote. They are all framed exactly as you are framing this debate (This SCARY THING is going to happen unless you act now), and that is to create a divisive rift in the population that is more exploitable by fear.

      You know, it IS POSSIBLE to deal with sensitive and compelling issues without getting all apocalyptic about it.

    19. If the “lutte des classes” (class struggle) framework is good (although I already applied for the Cayenne Cosmist Party here in French Guiana), I would be a Terran, and fight for any Terran to have the right to live forever and be a hundred times (not more) smarter than Hugo de Garis. The Cosmist rising, as a “transcension” of matter and physics in general, will come later, I think.

    20. @Eray: clearly that’s what I too lack. Explain to me in ten lines “evolutionary algorithms” and so on, so that I can understand why there will soon be a “tipping point” in AI that will create a kind of supersmart AI which, if we want to talk to her and understand her, we will have to give some freedom, little by little letting her into our world WITHOUT REALLY KNOWING HER. Well, if that makes enough sense. On the opposite side, if you watched “Transcendent Man”, it’s quite obvious that even today many people around the world just hate the white ex-colonizers etc. and would love to benefit from an AI’s super-help to wipe us out, with all our uneducatedness and the like. So the danger is probably real, insofar as AI researchers seem much less in control of the development of their science than, for example, atomic bomb makers. Should we see it as a 21st-century remake of the “lutte des classes”?

      • That’s pretty obvious. Islamists and other anti-West radicals aren’t anti-AI. But they don’t want a “friendly AI” either. They want a tank-building, nuke-churning, SkyNet-style AI, and they need it badly. Guys like Chavez, Bin Laden, Gaddafi and Putin will be the most devout Cosmists, that’s for sure.

    21. This is a non-issue as I see it, because we do not have to build autonomous AI at all. Why do you assume that is necessary? If I understand that, I can give more precise answers.

    22. This article reads like the juvenile fantasy of a pubescent sci-fi geek.

    23. “This essay will hopefully motivate people concerned by the species dominance issue to start acting politically, by spreading the word to the media, to the general public, to universities, to think tanks and to politicians, eventually creating their own political parties to prepare for when the issue reaches boiling point.”

      This essay describes the steps needed to reach the boiling point.

      • Since Hugo heavily favors the “Artilect War” scenario, it is understandable why he would try to drag the topic into the public sphere ASAP. He apparently thinks that a deadly confrontation between “Terrans” and “Cosmists” can be avoided by involving everyone in a political debate. I’m not so sure about that – calling attention to this issue will probably exacerbate the confrontation instead of mitigating it.

        To me, somehow it feels ironic (if not almost absurd) to think that the end of civilization may not come because of the appearance of a badly designed artificial intelligence, but rather as a by-product of the quibble over whether it should be built.

        My strategy for reaching the singularity via “exponentially self-modifying AI” would be the exact opposite of Hugo’s idea: Keep a low profile and be happy when ordinary people (obviously not the financiers of SIAI etc.) think that AGI researchers are a bit crazy and shouldn’t be taken too seriously.

        It would definitely be better not to involve certain kinds of people in the decision of how and whether a friendly AGI should be built. Just think of all the religious and environmentalist nuts out there. Ordinary people have no real concept of rational thought and will happily and blindly commit every “intellectual crime” there is, from the primitive naturalistic fallacy to common superstition.

        We should probably avoid making the “AI problem” a question of popular vote. People will pretty surely make the wrong choice, because they tend not only to be stupid (which is okay) but also to be completely oblivious to their stupidity and to the fact that their opinions usually aren’t born from reason but from primitive emotion. Is that a good basis for making what may be the most important decision in all of human history?

        Besides, I also have no idea where Hugo got the impression that people are currently divided “right down the middle” with about ~50% being potential Cosmists or transhumanists. I’m under the impression that if AGI were up for vote, its proponents would most certainly lose.
