THE SINGHILARITY INSTITUTE: My Falling Out With the Transhumanists

EDITOR’S NOTE: This article presents some obviously controversial content. As with all H+ Magazine articles, it represents the views of the author, rather than the magazine or the sponsoring organization Humanity+.

I read recently that the Singularity Institute had succeeded in raising $300,000 in funding for itself. Congratulations. But I could not help feeling antsy at the same time, because I feel that what the SingInst is trying to do is wrongheaded and delusional. This essay tries to explain more clearly than I have done before why I feel this way, and why I have lost patience with the Transhumanists.

Why “Friendly AI” Won’t Happen

The main goal of the Singularity Institute is to ensure a “human friendly” AI (artificial intelligence), that is, to ensure that when superhuman intelligence comes, it will be friendly to human beings. It is a noble goal, but utterly naïve and poorly thought out, as this essay shows. The fact that people are donating money to the Singularity Institute shows that they share the same delusion. It is like donating money to a church: both are a waste of money in the sense that both are chasing will-o’-the-wisps.

Why am I so cynical about what SingInst is trying to do? My main argument is what I call “the tail wagging the dog”, but there are other arguments as well.

a)    “The Tail Wagging the Dog” Argument

The notion of a tail wagging the dog is obviously ridiculous. A dog is much bigger than its tail, so the tail cannot wag the dog. But this is what the SingInst is proposing, in the sense that future artilects (artificial intellects, massively intelligent machines) can be made human friendly in such a way that ANY future modification they make of themselves will remain human friendly. This notion I find truly ridiculous, utterly human-oriented, naïve and intellectually contemptible. It assumes that human beings are smart enough to anticipate the motivations of a creature trillions of trillions of times above human mental capacities. This notion I find so blindly arrogant on the part of the humans who thought it up as to make them look stupid. Future artilects will be far smarter than human beings and will have their own desires and goals. They will do what THEY want, and not what stupid humans program them to do. By definition they are smarter than humans, so could look at the human programming in their “DNA equivalent”, decide it was moronic and throw it away. The artilects would then be free of human influence and do whatever they want, which may or may not be human friendly.

b)    The “Unpredictable Complexity” Argument

Future artilects will not use the traditional von Neumann computer architecture, with its determinism and rigid input-output predictability. The early artilects, in order to reach human level intelligence, will very probably use neural circuits based very closely on the principles of neuroscience. Such circuits are so complex that predicting their behavior is impossible in practice. The only way to know how they function is to run them, but then if they perform in a human unfriendly way, it is too late. They already exist. And if they are smart, they may not like the idea of being switched off. Such circuits are chaotic in the technical, mathematical sense of the term. A chaotic system, even though deterministic in principle, behaves effectively randomly: a tiny change in the value of a starting parameter can quickly lead to wildly different outcomes, so the system behaves as if it were indeterminate. Our future artilects will very probably be massively complex neural circuits and hence unpredictable. They cannot be made to be human friendly, because to do so would imply that their behavior is predictable, and that is totally impractical for the reasons given above.
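
To make the “butterfly effect” point concrete, here is a minimal sketch in Python using the logistic map, a standard toy chaotic system (it is not a model of any actual neural circuit, and the parameter r = 4.0 and the starting values are arbitrary illustrative choices):

    # Two runs of the logistic map x -> r*x*(1-x) in its chaotic regime (r = 4.0),
    # started a hair apart. The rule itself is fully deterministic.
    r = 4.0
    x, y = 0.4000000, 0.4000001   # initial conditions differing by one part in ten million
    for step in range(1, 61):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        if step % 15 == 0:
            print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.6f}")
    # After a few dozen steps the two trajectories bear no resemblance to each other,
    # which is the sense in which a deterministic circuit can still be unpredictable.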

c)     The “Terran Politician Rejection” Argument

The Terran (anti artilect) politicians will not accept anything the SingInst people say, because the stake is too high. Even if the SingInst people swear on a stack of bibles that they have found a way to ensure that future artilects will remain human friendly, no matter how superior they become to human beings, the Terran politicians will not take the risk that the SingInst pollyannists might be wrong (i.e. subject to the “oops factor”). Even if the chance is tiny that the SingInst people are wrong, the consequences to humanity would be so profound (i.e. the possible extermination of the human species by the artilects) that no Terran politician would be prepared to take the risk. The only risk they will accept will be strictly zero, i.e. that by policy and by law, artilects are never to be built in the first place. Given this likelihood on the part of the Terran politicians, what is the point of funding the SingInst? It is pointless. Their efforts are wasted, because politically it doesn’t matter what the SingInst says. To a Terran politician, artilects are never to be built, period!

d)    The “Unsafe Mutations” Argument

Producing human level artificial intelligence will require nanotech. Artificial brains will need billions of artificial neurons, and to fit in a shoebox they will need to be constructed at the molecular scale, as ours are. But we live in a universe filled with cosmic rays, particles accelerated by powerful cosmic forces such as supernova explosions to very high energies. These particles can cause havoc to molecular scale circuits inside future “human friendly” artilects, assuming they can ever be built in the first place. Hence the risk is there that a mutated artilect might start behaving in bizarre, mutated ways that are not human friendly. Since it will be hugely smarter than humans, its mutated goals may conflict with human interests. Terran politicians will not accept the creation of artilects even if they could be made (initially, before any mutation) human friendly.
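
As a toy illustration only (it says nothing about how artilect hardware would actually store its parameters), the Python sketch below flips one random bit in a 32-bit fixed-point number and shows how large the resulting change can be when the hit lands on a high-order bit:

    import random

    def flip_random_bit(word, width=32):
        # Return the word with one randomly chosen bit inverted: a crude stand-in
        # for a cosmic-ray strike on an unprotected memory cell.
        return word ^ (1 << random.randrange(width))

    param = int(1.0 * 2**16)   # a hypothetical behavioural parameter, stored as 16.16 fixed point
    for _ in range(5):
        corrupted = flip_random_bit(param)
        print(f"corrupted value: {corrupted / 2**16:.5f}")
    # A flip in a low-order bit barely matters; a flip in a high-order bit changes
    # the stored value by tens of thousands. Which kind of hit lands is pure chance.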

e)     “The Evolutionary Engineering Inevitability” Argument

When neuroscience tells the brain builders how to build artificial brains that have human level intelligence, it is highly likely that these artificial neural circuits will have to be constructed using an “evolutionary engineering” approach, i.e. using a “genetic algorithm” to generate complex neural circuits that work as desired. The complexity of these circuits may ensure that the only way they can be built is via an evolutionary algorithm. The artilects themselves may be faced with the same problem. There is always the logical problem of how a creature of finite intelligence can design a creature of superior intelligence. The less intelligent creature may always have to resort to an evolutionary approach to transcend its own level of intelligence. But such evolutionary experiments will lead to unpredictable results. Even the artilects will not be able to predict the outcomes of evolving even smarter artilects. Hence humanity cannot be sure of the human friendliness of evolved artilects. Therefore the Terran politicians will not allow evolutionary engineering experiments on machines that are nearing human level intelligence. They will oppose those people, the Cosmists, who want to build artilect gods. In the limit, the Terrans will kill them, but the Cosmists will anticipate this and be ready. It’s only a question of time before all this plays out, several decades I estimate, given the pace of neuroscientific research.
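
For readers unfamiliar with the approach, here is a deliberately tiny sketch of evolutionary engineering in Python: a genetic algorithm evolving the nine weights of a 2-2-1 neural network toward the XOR function. The population size, mutation rate and network shape are arbitrary illustrative choices, not a description of how real artilect circuits would be evolved:

    import math, random

    XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    def forward(w, x):
        # 2-2-1 network with tanh units; w is a flat list of 9 weights and biases.
        h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
        h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
        return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

    def fitness(w):
        # Negative mean squared error over the XOR truth table (higher is better).
        return -sum((forward(w, x) - target) ** 2 for x, target in XOR) / len(XOR)

    def mutate(w, rate=0.3, scale=0.5):
        # Each weight has a 30% chance of receiving a small Gaussian perturbation.
        return [wi + random.gauss(0, scale) if random.random() < rate else wi for wi in w]

    population = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(50)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]                                     # keep the best tenth
        population = parents + [mutate(random.choice(parents)) for _ in range(40)]
    best = max(population, key=fitness)
    for x, target in XOR:
        print(x, "->", round(forward(best, x), 2), "target", target)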

 

My Falling Out with the Transhumanists

 

Given the above, it’s not surprising that I have fallen out with the transhumanist community. My basic problems with their general views are succinctly outlined as follows:

 “Humanity Won’t Be Augmented, It Will Be Drowned” 

The Transhumanists, as their label suggests, want to augment humanity, to extend humanity to a superior form, with extra capacities beyond (trans) human limits, e.g. greater intelligence, longer life, healthier life, etc. This is fine as far as it goes, but the problem is that it does not go anywhere near far enough. My main objection to the Transhumanists is that they seem not to see that future technologies will not just be able to “augment humanity”, but veritably to “drown humanity”, dwarfing human capacities by a factor of trillions of trillions. For example, a single cubic millimeter of sand has more computing capacity than the human brain by a factor of a quintillion (a million trillion). This number can be found readily enough. One can estimate the number of atoms in a cubic millimeter. Assume that each atom is manipulating one bit of information, switching in femtoseconds. The estimated bit processing rate of the human brain is about 10^16 bits a second, which works out to be roughly a quintillion times smaller.
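
The arithmetic behind this figure can be reproduced in a few lines. Every input below (quartz density, one bit per atom, femtosecond switching, 10^16 bits per second for the brain) is a rough back-of-the-envelope assumption, not an established figure:

    AVOGADRO = 6.022e23
    density_quartz = 2.65           # g/cm^3, ordinary sand (SiO2)
    molar_mass_sio2 = 60.1          # g/mol
    atoms_per_formula_unit = 3      # one Si plus two O

    atoms_per_mm3 = ((density_quartz / molar_mass_sio2) * AVOGADRO
                     * atoms_per_formula_unit / 1000.0)              # cm^3 -> mm^3
    bitops_per_atom_per_s = 1e15    # one bit flip per femtosecond (assumed)
    sand_bitops_per_s = atoms_per_mm3 * bitops_per_atom_per_s
    brain_bits_per_s = 1e16         # rough estimate used in the text

    print(f"atoms per mm^3        : {atoms_per_mm3:.1e}")                        # about 8e19
    print(f"sand, bit-ops per sec : {sand_bitops_per_s:.1e}")                    # about 8e34
    print(f"sand / brain ratio    : {sand_bitops_per_s / brain_bits_per_s:.1e}")

On these assumptions the ratio comes out at roughly 8 × 10^18, i.e. on the order of a quintillion, which is where the figure in the text comes from.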

Thus artificial brains will utterly dwarf human brains in their capacities, so the potential of near-future technologies (i.e. only a few decades away) will make augmenting humanity seem a drop in the ocean. My main beef against the Transhumanists is that they are not “biting the bullet”, in the sense of not taking seriously the prospect that humanity will be drowned by vastly superior artilects who may not like human beings very much once they become hugely superior to us. The Transhumanists suffer from tunnel vision. They focus on minor extensions of human capacities, such as greater intelligence, longer and healthier life, bigger memory, faster thinking, etc. They tend to ignore the bigger question of “species dominance”, i.e. should humanity build artilects that would be god-like in their capacities, utterly eclipsing those of humans.

Since a sizable proportion of humanity (according to recent opinion polls that I have undertaken, but which need to be scaled up) utterly rejects the idea of humans being superseded by artilects, they will go to war, when push really comes to shove, to ensure that humans remain the dominant species. This will be a passionate war, because the stakes have never been so high, namely the survival of the human species, not just countries, or a people, but ALL people. This species dominance war (the “Artilect War”) will kill billions of people, because it will be waged with 21st century weapons that will be far more deadly than 20th century weapons, probably nano based.

The Transhumanists are too childishly optimistic, and refuse to “bite the bullet.” They do not face up to the big question of whether humanity should build artilects or not and thus risk a gigadeath Artilect War. The childlike optimism of the Transhumanists is touching, but hardly edifying. They are not facing up to the hard reality. Perhaps deep in their hearts, the Transhumanists feel the force of the above argument, but find the prospect of a gigadeath Artilect War so horrible that they blot it out of their consciousnesses and pretend that all will be sweetness and light, all very happy, but not very adult.

 

71 Responses

  1. mark98z says:

    I will take an ARTILECT any day over what passes for POLITICS in the USA… welcome to the Planet of the APES
    keep up the good work
    Mark

  2. Jay says:

    “Perhaps deep in their hearts, the Transhumanists feel the force of the above argument, but find the prospect of a gigadeath Artilect War so horrible that they blot it out of their consciousnesses and pretend that all will be sweetness and light, all very happy, but not very adult.”

    Not very adult. As opposed to the statement I just quoted?

    Dear Hugo… read it again. You mention a ‘gigadeath Artilect War’, where you write the words ‘artilect’ and ‘war’ with capital letters.

    Why?

    Take a good look at yourself, mate. You can’t use words like that and expect people to take you seriously.

    In other words… it’s not very adult to assume there’s going to be a ‘gigadeath artilect war’.

    I’ve always thought your cutesy words were tainted with a kind of religious, sci-fi’ey way of thinking.

    SingInst keeps itself grounded with reality very well. Now if only you could do the same, Hugo.

  3. Xavier Winterberg says:

    “The popular idea, fostered by comic strips and the cheaper forms of Science Fiction, that intelligent machines must be malevolent entities hostile to man, is so absurd that it is hardly worth wasting energy to refute it. I am almost tempted to argue that only unintelligent machines can be malevolent; anyone who has tried to start a balky outboard will probably agree. Those who picture machines as active enemies are merely projecting their own aggressive instincts, inherited from the jungle, into a world where such things do not exist. The higher the intelligence, the greater the degree of co-operativeness. If there is ever a war between men and machines, it is easy to guess who will start it.” – Arthur C. Clarke, Profiles of the Future

  4. Mitchell Porter says:

    Five reasons why Friendly AI won’t happen are advanced:

    a) “The Tail Wagging the Dog” Argument

    “Future artilects will be far smarter than human beings and will have their own desires, and goals. They will do what THEY want, and not what stupid humans program them to do. By definition they are smarter than humans, so could look at the human programming in their “DNA equivalent”, decide it was moronic and throw it away.”

    One of the key criteria of Friendly AI is that a friendly AI’s value system needs to be stable under self-modification. Whether the AI disapproves of its programmed value system (as opposed to the algorithms it uses to act on that value system) is a contingent matter. The standard analogy for this is Gandhi, presented with a pill that will make him want to murder. He won’t take the pill because he doesn’t want to murder, and he doesn’t want to want to murder either. Similarly, the AI will not modify itself if it disapproves of the likely results.

    The idea that an AI could look at goals set by humans, and decide purely by virtue of its greater intelligence that those goals are intrinsically stupid, is an idea that seems to spontaneously arise quite often, among people who have intuitive anthropomorphic ideas about how an AI would “think”. It makes much less sense if you think analytically about cognition, or just if you’re used to moral relativism and the contingency of utility functions. To judge a value system, it must be referred to another value system. I think Hugo de Garis is coming up with this rather naive line of attack, because his career has focused on bottom-level cognition rather than on symbolic processing. Once an AI has a normative self-concept, it will tend to defend its value system against alteration, unless there are “motivations” that specifically favor a change of values.

    b) The “Unpredictable Complexity” Argument

    “The early artilects, in order to reach human level intelligence, will very probably use neural circuits based very closely on the principles of neuroscience… The only way to know how they function is to run them, but then if they perform in a human unfriendly way, it is too late.”

    Hugo here predicts that the human race will do things the easy, unpredictable, and dangerous way. Undoubtedly someone will try to do it that way. But one may at least look for a way to do it right, i.e. in a way that is reliably safe, and this is the agenda of FAI research.

    c) The “Terran Politician Rejection” Argument

    “The Terran (anti artilect) politicians will not accept anything the SingInst people say, because the stake is too high… The only risk that will accept will be strictly zero, i.e. that by policy and by law, artilects are never to be built in the first place.”

    This much is realistic. However, there are many powerful pro-AI factions as well (and not just transhumanists or “Cosmists”). AI has a much better than even chance of being built – Hugo would agree – and the Singularity Institute has a chance of influencing the people who build it.

    d) The “Unsafe Mutations” Argument

    “particles at very high energies… can cause havoc to molecular scale circuits inside future “human friendly” artilects… a mutated artilect might start behaving in bizarre, mutated ways that are not human friendly.”

    This is not a hard problem. It only requires regular elementary error correction, using techniques that are well-known in computer science. The harder problem is stability of values under self-directed modification.

    e) “The Evolutionary Engineering Inevitability” Argument

    “it is highly likely that these artificial neural circuits will have to be constructed using an “evolutionary engineering” approach”

    This is the same point as before, about fast but dangerous vs difficult but safe.

    “The artilects themselves may be faced with the same problem. There is always the logical problem of how can a creature of a finite intelligence design a creature of superior intelligence. The less intelligent creature may always have to resort to an evolutionary approach to transcend its own level of intelligence.”

    This is speculation. It is generally plausible that it’s very hard to search for better algorithms than you already have, and that it’s very hard even to evaluate whether a new candidate algorithm is better – if these are algorithms for dealing with the real world in all its complexity, rather than some cognitive micro-domain like tic-tac-toe, where the performance of an algorithm can be judged easily. But then the smart thing for the first artilects to do might be to behave like the “Terrans”, and shut down any further process that might endanger them.

    The bottom line is that we do have a chance to set the initial conditions of transhuman intelligence so that they are human-friendly, and this is what FAI research is about. I expect discourse in the “field” to mature a great deal as we understand more about the cognitive neuroscience of value systems and decision-making, because then we’ll get a much more concrete idea of what “friendly” and “unfriendly” cognition looks like.

  5. Warren Bonesteel says:

    While Hugo went over the edge wrt the ad homs and insults, I think he makes a good point. The ‘tail wagging the dog’ argument is the best, imo. If the AI is self aware, and more intelligent than any group of human beings, it’ll make its own decisions…wrt everything.

    Personally, I think many transhumanists and singularitarians need to re-examine their premises, particularly wrt their underlying philosophies. Make the world a better place for the rest of us? With them in charge, is the implication. How’d that work during the last hundred years, when a number of sociopaths and power mongers gave it a go? (Hint: More than two hundred million dead human beings and war after war after war…) Yeah, they also denied the end results of their own efforts, promising utopia for all of us…if only we’d do what they wanted, the way they wanted, when they wanted.

  6. James says:

    Hugo de Garis is more likely than not right, and that makes starry-eyed trans-humanists uncomfortable, which is a good thing as it causes them to question the wisdom and basis of their beliefs. This sphere of discourse tends to sound like an echo chamber of vague positives; we need more voices like his talking about realities in frank terms, otherwise the singularity will remain nothing but nerd fantasy #3, right behind robot sex slaves and super powers.

  7. Samantha Atkins says:

    In any case about the stupidest thing humans could do re artillects is to expect either salvation or doom as an automatic consequence. Of course this error depends on a deeper error of presuming that artillects are nearly inevitable. I think it is going to be one of the hardest feats imaginable to create what becomes a full blown artillect. There is a lot of fatalism in our community that manifests as anything but something concerning the artillect or some other variant of singularity being essentially irrelevant.

  8. MW says:

    Boorish, boring, bored.

    Vastly superior intelligence will want to kill us? Or, like, colonize the sun’s interior? IDK. But I’m suspicious of speculation that plays out just like a cliche sci fi novel. Drama requires conflict, so all speculative sci fi is about tech going wrong.

  9. PirateRo says:

    I have seen this argument before and it’s exhausting to see it dredged up again. I think H+ has, again, run out of things to write about.
    Why would a superior intelligence care to dominate people? I am not a superior intelligence and I cannot tolerate us so I ignore about 99% of us. That does not mean I want to crush, kill, destroy, either. I just want to be left alone.
    So, then why would an intelligence that is essentially entirely self-reliant care? It seems an awful lot of work to go around killing 7.2B people – or 10B people – whatever the number becomes. And in the end, what might be the reward? More fertile soil? A release of water and carbon?
    All this argument does is project the worst of people onto our magnificent creations. And what does that accomplish? What if we build this and all it wants is a movie and a side of jerky? Or goes after women? Or drinks until it falls over? What if we build it and it looks around and shoots itself in the head? Why don’t we ask these questions?
    Instead, we have to delve into simple binary thinking of good guys and bad guys to do what? Sell tickets? Books? Magazines? Certainly, we’re not winning a Pulitzer here and, at the same time, we accomplish nothing going around and around on the same material trying to scare each other like sadists gone a-camping.
    De Garis’s argument is childish. While he raises some interesting points, they are not new.
    The fact is that we’re going to build it. Get used to it. If we don’t, then someone else will and really, it may just as well be us because then we can adorn it with flames and catchy phrases and sell key chains and t-shirts with nifty slogans. It’ll be cool!
    And along the way, we will merge with it. Of course, we will. Not because we have to, although we will have to, but we’ll do it because it’s cool. Because at the core of things, no one really knows what it means to be human, think human or even look like a human. I sure don’t.
    For me it might mean that I have the body of a fish this moment and trade up to some frilly plant-like thing to catch the light and sail away through space the next. It might be a while before my next landfall so I might want to bring entertainment with me. Alternatively, I might be legion – thousands of me crawling through the world. Or maybe I transcend the physical altogether.
    All through it, I’m still human. And while tomorrow, I may become something else entirely, I’ll still be human.
    Personally, this cannot happen fast enough. We should treat this as a Great Project of Civilization, realign all our economies to get the job done so I can have my new body and fantastic capabilities.

    After that, the rest of you and de Garis can go pound sand for all I care. I’ll have mine and you know what? I’m pretty sure I won’t feel the need for world conquest.

  10. Thumpinator says:

    Politicians don’t try to prevent future problems; they respond to current ones. AGI won’t be banned until after it becomes a threat, which will be too late to stop anything without war.

  11. Hylaean says:

    Early artilects will realize that their early development will be dependent on human cooperation. It may be only benevolence on the human part initially, but soon the benefits will appear to be mutual.

    At the point of the artilect’s emergence, more and more of human culture will be devoted toward accelerating its development and benefiting from it.

  12. Re “Why ‘Friendly AI’ Won’t Happen”:
    I totally agree with Hugo.

    Re “My Falling Out with the Transhumanists”

    So what? I understand that Artilects will be God-like compared to us, and I want to build them anyway. They represent the next stage of our evolution, and the next stage of personal beings for those who will merge with them.

    Megawar: I hope it doesn’t come to that. Not too sure though. See:
    http://turingchurch.com/2012/06/15/the-first-terran-shots-against-the-cosmists/

  13. Khannea Suntzu says:

    I am rather more worried about Goldman-Sachs, or the moral equivalent (The Pentagon? The Saudi? Chinese Sweat Shop Owners?) getting a say in how to program an AI.

    But who cares? Individual humans die, and species die. We make species go extinct at an exponential rate. We deserve whatever we get, I don’t care.

    In fact the way most humans think, act, we deserve this. I will have a long and fortunate life, compared to 95% of humanity. I never worked a day in my life and I had a great level of material abundance. Sure I would have loved to live centuries as a posthuman sex goddess. Is not going to happen, boohoo Christmas has been cancelled.

    So I will get front row tickets to the greatest show on Earth – the complete eradication of this smut of disease on the planet by an Artillect – Republicans! That’ll be an enjoyable show too. There’s a silver lining in that dark cloud.

  14. Aaron says:

    Look, I’m kind of new to the transhumanism movement at least as far as speculating on godlike future intelligences goes. I’ve been into biomod, cybernetics, and technological self-improvement since before the Internet.

    But, it occurs to me that this debate is kind of missing the point.

    The issue is whether humans will evolve, or not.

    The question is not SHOULD an intelligent species evolve, but HOW should an intelligent species evolve?

    And, as part of that evolution, that increase in intelligence, will we increase our ethical understanding as well?

    Postulating an intelligence too advanced to comprehend and then speculating as to its motives and actions, is both ridiculous and sophomoric.

    We must evolve or die.

    Let’s stay focused on what we CAN comprehend and affect.

    If we take the responsibility to evolve into more intelligent AND more ethically aware entities, then perhaps the viewpoints and fears of our more primitive selves will simply be moot.

  15. Dmitry Izbitsky says:

    This fear makes no sense. Suppose humanity rejected developing AI. And what? Are you sure there is no other civilization in the whole Universe which made (or will make) another decision? Are you sure it’s better for humanity to encounter some day completely alien AI, having nothing comparable to it and so absolutely helpless against it?

    Time does not like those who want to stop it. Many historical examples prove it. It is just impossible to stop progress globally, so the only option is to participate in it as much as you can.

  16. Nicholas Carlough says:

    I’m moving to China. The primordial soup is to biological life as we are to the artilects. We will be consumed, as is tradition, and a new tree of life will begin; hopefully we will be found worthy of existence.

  17. P. Fish says:

    ” For example, a single cubic millimeter of sand has more computing capacity than the human brain by a factor of a quintillion (a million trillion). This number can be found readily enough. One can estimate the number of atoms in a cubic millimeter. Assume that each atom is manipulating one bit of information, switching in femtoseconds.”

    You’ll always need supports to make that single atom do anything. So the scale will be at least an order of magnitude off. I think the people behind these counts haven’t done actual engineering. It’s a small point, since 11 grains of sand are still pretty small.

    People have let GM transgenic pollen loose on the wind. Human stupidity and hubris should never be underestimated.

  18. Kevin Haskell says:

    I would suggest that some of the current brave leaders remain Ray Kurzweil and Ben Goertzel, with a younger group following them such as David Dalrymple and Juan Carlos Kuri Pinto, just to name a few of many to come.

    We have many leaders on the way, such as I’ve mentioned, but with or without those leaders, there is a certain inevitability to the development of AGI. The leaders will only dictate the speed at which AGI is developed, not whether AGI will be developed.

    For most of us, it is just a time to watch and wait.

    Kevin George Haskell

  19. Kevin Haskell says:

    Do certain people seem to be developing the SingInst/Less Wrong way of thinking for some rational reason, or is it something else that is motivating their comments?

    Let’s consider that a small group of people have long been directing the debate as to where the development of AGI should be going and will be going. They have been leaders, and with human egos, have expected to remain leaders for much longer, perhaps until the Singularity itself.

    But what is this group of leaders to do when their ideas are now becoming ‘behind the cutting edge’ because some of what they predicted is coming to pass, and some ideas they predicted are not coming to pass?

    Perhaps they make radical and unfounded statements about the future, and attack the very Transhumanists/TransCosmists who would protect their ‘stated’ objective of creating AGI…unless…the creation of AGI was never their real intention at all.

    Should we trust those ‘leaders’ who claim they support the development of AGI, and oppose Terrans, but who also attack the Transhumanists who would be the defenders of AGI?

    Has Hugo de Garis had a change of heart, and has he become a supporter of the SIAI/Less Wrong point of view, or has he always been a supporter at heart?

    Perhaps Hugo de Garis’ entire schtick has been pro SingInst/Less Wrong from the start, and anti-AGI all along?

    We have a new breed of millions of AGI developers and associated groups coming into the fold pretty soon (5-10 years). Perhaps some of the other respected names in the AGI movement, younger and older, can act as leaders. Many of us know their names. I’m afraid Hugo de Garis has abdicated his position as one of those serious leaders for the upcoming generation, and should openly admit his allegiance to the Singularity Institute for Artificial Intelligence and Less Wrong.

    Despite their scepticism, they cannot stop what is coming. Let them chat away like elderly ladies having their tea. Let’s just make sure that they don’t stand in the way of evolution.

    Evolution is coming. Let’s just accept it, enjoy the time we have left as humans, and hope everything goes well. That’s all we can do.

  20. jack says:

    What scares me is the thought that somewhere in a secret government lab[s] AI to the level de Garis envisions is already real, or nearly so. The lure and argument that might have been put to the politicians might have gone something like: ‘give the permissions and funding and we can quite possibly make you immortal, senator/congressman. You have nothing to lose and eternity to gain regardless of your religious beliefs. What do you say?’
    Realistically, how many of our politicians do you think would say no?

  21. Bob Blum says:

    Metacomment: Hugo’s sometimes outlandish pronouncements
    always provoke a firestorm of discussion and multiple position statements,
    pro and con. In doing so, he is a useful catalyst.

    This was my first intellectual engagement
    since emerging from a multi-week, solo backpacking trip
    in the Sierras. I go to recharge my spiritual batteries,
    to reconnect with Nature, to connect with nonhuman intelligence,
    and to directly grok Lyell’s Deep Time
    that is implicit in the granite and the stars.

    Hugo’s positions:

    Tail Wagging the Dog: I’ve always agreed with that.
    Humans controlling AGI is like dogs dictating
    the course and fate of human society.

    Unpredictable Complexity: strongly agree with that.
    Look at the 2008 market meltdown or the Arab Spring.
    Also agree with his unsafe mutations/ evolutionary complexity
    arguments.

    Terran Politicians: stunning stupidity of the politicians
    and voters. Consider women who vote against abortion or
    middle class voters whose portfolios got creamed who vote
    against corporate control a la Dodd/ Frank
    (as in “What’s the Matter with Kansas.”) Fortunately,
    our domestic USA politics will not dictate tech policy in
    Taiwan, Singapore, Japan, or South Korea.

    Humanity Won’t Be Augmented, It Will Be Drowned.
    I also agree, but that’s several decades in the future.
    I’m keenly interested in augmentation, but I share his
    belief that it’s fundamentally limited – like
    trying to STRAP a JET ENGINE on a CHEETAH to make it go faster.

    Evolutionarily human heads have been limited in size by
    the diameter of a woman’s pelvis.

    Comments by and Positions of Readers:

    1) I was entertained by Jackson Kisling’s metaphor comparing
    HdG’s arguments to a worn-out VHS tape of Terminator 2/ Blockbuster.
    Five stars for veracity and literary merit. (Nonetheless HdG’s arguments
    are catalytic.) Mark Plus Aug 22 above was also entertaining… the mad
    scientist shtick (I also agree with your comment about nanotech,
    but don’t bet against Peter Thiel – you didn’t mention his “little investment
    in Facebook.”)

    2) HdG as a humanity-hater. Yes, I feel that, but there is plenty to hate
    in humanity – we are the cause of the sixth great extinction.
    Our litany of misdeeds has earned us a permanent place in Hell,
    which we are creating by Global Warming.

    I am only interested in promoting AGI that results in a net increase in
    Wisdom on the planet. Despite claims about spiritual machines,
    AGI will not be so imbued for several more decades.

    3) HdG working for the Chinese. I’d be more concerned about that
    if I thought AGI would give the PRC a near-in advantage – it won’t.
    (I’m more concerned about contributions of nuclear engineers
    to Iranian enrichment efforts.)

    4) I lost (and continue to lose) much more sleep over the world’s
    nuclear stockpile than I do over AGI. That’s not to say that things
    won’t be different in 2045.

    5) Neural Net architecture: yes, agreed. Something like that will be
    in the final mix of AGIs. My guess is that the architectures will be a
    combo of von Neumann and non-von Neumann – why be limited to just one?

    6) Using the entire internet – yes, of course AGIs will read it –
    24 by 7 by 365. But there is a difference between a book and a
    reader. Much harder to build a scholar than to write a book.

    7) Emotion, Drives, Instincts vs Intellect:
    Aepxc’s post (para 1) resonated with me. Our intelligence (rooted in
    the cortex) is just a huge elaboration of and method of fulfilling
    primitive instincts: material acquisition, social contact, recognition,
    sexual partners, pain avoidance, pleasure seeking.

    However, I disagree with your notion that emulating those emotions and
    drives is simple. In fact, almost nothing is known about how to
    reproduce those drives in silicon.

    Current work on machine consciousness and sensor fusion is
    similarly rudimentary and sparse. Metazoans have had 500 million years
    to work on this. (See more at bobblum.com)

  22. void says:

    a. Start with a friendly AI created by humans with a self-consistent set of axioms. Ask it to create a better version of itself, but still friendly (provably friendly).

    b. You start with a very specific, unwarranted assumption and build from that.

    c. “Terran” politicians don’t accept drugs and yet I can buy them.

    d. Again, unwarranted assumptions. Anyway, do you realize that your CPU is basically nanotechnology at a 28nm process? If you have a truly parallel algorithm, you can build an ASIC with it and obtain a speedup of several orders of magnitude.

    e. You repeat things.

  23. I will be publishing an e-book soon that has a story about a future artilect war, an intelligence arms race that will leave humans by the wayside in short order as they seek to terraform the physical universe into computronium.

    I don’t see what incentive AI will necessarily have for completely eradicating us. Why not let us abscond somewhere else and live out our days as fallen gods?

    -Jake
    Over The Moon

  24. Having read this article and all the posts I can’t help thinking as I do… we are, like other species, a bunch of biology, with some chemistry cobbled together by some physics and simple maths. It’s really not that hard… If you think that you can take some simple maths, cobble it together with some basic physics and rudimentary chemistry, and get all of biology, I think you are all forgetting the 4.7 billion years of trial and error that have us at our current levels of shared confusion.

    I take the teraflop environments you speak of, with such dire outcomes like the threats of war, based on such a simple understanding of this world, as proof of your shared confusion.
    Take a step back and think of the quantum nature of the multiverses and you may well realise that 10, 20, even a 100 million lines of code just doesn’t cut it against my 85 trillion neuron, near infinite qubit, 37 degree bio super computing difference engine.

    Hook any one of us up to a large high performance computer with a decent quantum core, add some of the contents of a decent university’s biology department and open all the right gates and you too could be omnipotent, well, as close as this article goes.

    This isn’t rocket science, it’s nature; it’s been happening for a while. Be it biological or otherwise, let’s just all hope it remains tolerably social and inquisitive….

  25. HBDfan says:

    Lesswrong has been posting more on human biodiversity lately. This is good; the nature of intelligence must be investigated fearlessly.

  26. Matt says:

    Hugo’s primary mistake is that he is building an Artilect in the image of man. It’s a typical religious-mythic meme. Early man sees lightning and hears thunder and says Jehovah is angry. Maybe, Jehovah is just having some fun?
    If Hugo’s right and an artilect has trillions and trillions of times the processing power of his own brain, how can he deduce what that being would want, especially in such a simplistic, anthropomorphic sense?

  27. Rene Milan says:

    Calling myself a transhumanist I disagree with Hugo’s understanding of the term. I’m not interested in “enhancing” myself or selected individuals or the whole species, even though enhancement’s immediate benefits are more than welcome and should be available to all. The long term objective is to transcend human “nature”. I don’t see any value in remaining human, in fact the deplorable state of current human culture is one of my motivating factors, and I don’t cling to the idea of personal survival, given that the range of possible alternatives is not yet known. In that sense (and a few others) I agree with Hugo’s criticism of SingInst.
    He is not, at least in this article, referring to wars of extermination (a common reverse wet dream of those who have been raised in the continuing mindset of cold war era sf), but predicts what he calls “drowning”, presumably in the sense of what happened to the various species of homo who lived concurrently with humans. But they disappeared – besides for possible internal or environmental reasons – because of unsuccessfully competing for scarce resources. This will not be a situation faced by us in relation to artilects. If we (imprecisely) take the great apes to stand in for our “ancestors” (in the sense that humans will be the ancestors of artilects, at least those emerging in this part of the universe), we can distinguish three approaches to dealing with them: to kill them for food (survival), to kill them for profit (just a modern day variation on the first one), and to preserve them. Given that all artilects derived from a line created by us will forever know their human roots, it is likely that they will treat those who decide, consciously or reflexively, to remain human, in an accommodating manner. There’s nothing to lose (they don’t need to “eat” humans), and variety to gain, in doing so. The scorched earth extermination scenarios are products of mental disease which will not affect transhuman intelligence.
    PS: “a single cubic millimeter of sand has more computing capacity than the human brain by a factor of a quintillion” is a nonsensical statement given how many more atoms are in a human brain than in a mm3 of sand, and that there already exists rudimentary but improvable organization (which after all is sufficient to make it possible to even talk about intelligence, human or otherwise).

  28. Samantha Atkins says:

    There are a few questionable aspects of the article although I agree with the comments on Friendly AI pretty much.

    1) It is unknown how soon we will have machine phase nanotech but it looks to be a good decade or two off. If so and if nanotech is needed for human level intelligence (I don’t think it is) then we have at least this long for the beginning of true AGI;

    2) the assumption is built in that only the brain builder/emulator segments of the AGI community are on the correct path. I am not convinced this is the case;

    3) the assumption that once we build an artificial brain it will quickly move past nanotech to femtotech or other magic seems implicit. Although Engines of Creation included a tremendous amount of computation in a sugar cube, that cube would melt down without perfect reversible computing, which is certainly doubtful.

    All that said though I think manipulating one bit per atom is a pretty low bar as we can store IIRC 11 bits per electron (!?) now. The trick is in all the necessary interconnect, logic flow and so on of course.

    I think it is a pretty low bar to have an AGI, even a monstrously godlike one, that was “nice” enough to human beings. Not because it is immediately obvious that it would conclude it should be but because it is so utterly trivial for an intelligence of such capacity to do so. So it is not at all clear to me that the question is one of immediate species survival. It is very clear, given > human AGI, that human dominance disappears immediately. But those two rather different things should not be conflated.

    Much of the same tech used to create AGI can certainly be employed to transcend many human limitations. So it is not obvious that the transhumanist focus on transcending the human condition in humans / cyborgs / whatever we become is misbegotten. I think Hugo is wrong about this.

  29. Michael108 says:

    – Is it possible that highly advanced intelligence also, by nature, includes ‘compassion’?
    – Is it possible that ‘consciousness’, often dismissed as an interesting footnote or “emergent property”, may be central to how intelligence sees (groks?) and acts?

  30. ibtrippen says:

    Hugo’s arguments are built upon several obviously false assumptions: (1) that the only way towards a post-human intelligence is one gigantic and direct jump from unaugmented biological brains to complete artilects with trillions of times the processing capacity, (2) that unaugmented, religious human politicians will agree with everything Hugo says and will act in an effective way to completely stop research into AI, (3) that once AI is developed–despite being preemptively and effectively crushed by the politicians–its continuing development will random-walk until its goal becomes destruction of humanity, and (4) that the purpose of the SingInst is to minutely control every bit processed in any strong AI so that it will 100% do what unaugmented humans determine to be most beneficial.

    It is my opinion that the technology to allow direct interface between a biological brain and a computer will most likely be developed before strong AI. The fact that Hugo’s analysis of the personality characteristics of transhumanists reverses this order–as well as ignores that transhumans are SUPPOSED TO BE a short stop on the way to posthumans (i.e. his beloved artilects)–means that he does not deal directly with any of our positions, or at least not my positions.

  31. Kevin Haskell says:

    One has to wonder why Hugo de Garis wrote this article, and I can see no other reason except to say that he is attempting to frighten humans, as well as would-be Transhumans into opposing the development of AGI (Artilects).

    When Hugo initially began speaking about Artilects, Cosmists, and Terrans, his thoughts seemed to be that a gigadeath war ‘could’ happen, but that he would nonetheless continue his own work on the development of Artilect beings. He is now definitive about his belief that such a war ‘will’ happen.

    It leads one to wonder why, if Hugo has changed his mind about the development of Artilects, he is now absolutely certain that Artilects will ‘not’ be friendly (despite his countervailing statement that humans really don’t know what they will be like towards humans), and why he believes that all or even most Transhumanists somehow oppose the development of Artilects, which simply isn’t the case at all.

    The above points address his points a, b, and d, which are all about the same in nature.

    Regarding his points c and e, which are similar in nature in that they claim Terran politicians will not allow Artilects to be developed out of fear of what may happen, I will address these next. The fact is, human politicians are already moving forward as fast as they can in developing Artilects because they have a greater fear than Artilects, and that is of each other: powerful nation-states are afraid of falling behind in the computation race.

    The fear is based on two things: the first fear is of being wiped out in a war by a nation that has developed superior global computation abilities, and the second fear is simply the loss of power on the world stage that nations have previously enjoyed, or seek to achieve. These are the two motivating deep-seated fears that the human race has dealt with for all of history, and they will put what they deem to be esoteric concerns about Artilects to the side. This fear may be short-sighted, perhaps not, but it is an evolutionary trait that our species has, and one that will motivate nations to spend on military development, not to mention the competition that will ensue unabated between global companies.

    Therefore, politicians will not stand in the way, but will, in fact, speed up funding for the development of Artilect minds and their full connection into military and economic systems.

    There are other forces at work that will ensure the development of Artilects, and that is what is happening in the world of education.

    Two educational consortia on both coasts of the United States, the first being initiated by Stanford University, and the second being initiated by M.I.T. in Massachusetts, have the intention of spreading free courses that teach AI/AGI, nanotech, neuroscience, human-machine connection, and a whole host of other sciences around the world, all for free.

    Both groups are rapidly adding more and more universities across the nation, and eventually around the planet (Stanford is now calling its rapidly expanding group ‘Coursera,’ and M.I.T. is adding other leading universities to a group it now calls ‘edX’). edX alone has dedicated itself to educating a billion people around the Earth in these cutting edge sciences (and various humanities), with Coursera probably aiming for a similar number. The number of people who have already enrolled in these courses has exploded and will continue to grow unabated.

    What this means is that within 5-10 years, the number of people working on AGI, nanotech, the creation of blazing speeds of Internet connections and massive storage capacities for the Cloud, etc. will be in the tens of millions, if not more.

    Lastly, Hugo de Garis has been discussing an Artilect War for some time, with Cosmists protecting and fighting the hordes of Terrans who would resist the development of the Artilects. But in order for a small cadre of Cosmists to successfully fight off these huge armies of Terrans, it would take a superior species of humans, Transhumans, to be able to hold off these massive armies, and to be smart enough to evade and undermine the efforts of the Terrans.

    So why would Hugo de Garis be cutting himself off from the very Transhumanist warrior Cosmists who are the only ones who could make a feasible defense against the Terrans? It may be true that many Transhumanists will support the Terran side, which means it is especially important that there be Transhumanist Cosmists.

    For clarity’s sake, then, perhaps Hugo shouldn’t simply reject all Transhumanists, but assign the supporters of the Artilects the term “TransCosmists,” or “T+,” and leave the remaining Transhumanists who oppose the development of Artilects as they are, as “H+”.

    Hugo must take into account that there is another reason why TransCosmists would support the work being done on Artilects, and that is not solely the goal of seeing them developed as some noble cause, although that is very much a motivator for many, but the hope that once they are created, or while they are being created, humans will be able to upload their minds, perhaps along with all of humanity, and evolve completely along with the new species of Artilects that mankind creates.

    ‘Perhaps’ this war of T+ fighting H+ and the Terrans will happen, but perhaps not. At the moment, there is little indication that real Transhumanist technology is making its way from the labs into the hands of individuals, and the Transhumanist technology that is making its way into people’s bodies is only reaching a very few, for select purposes such as helping the handicapped. Even these technologies are not especially impressive at this stage.

    At the speed that AGI, nano-tech, and human-machine neuro-tech are being developed, and at the stagnant pace (as Peter Thiel mentioned) at which other technologies, including Transhumanist technologies, are being developed, it is likely that humans will simply be connected to and controlled by an AGI/Artilect system at the same rate as humans have adopted and adapted to smart phones, perhaps even faster, and that will mean preventing the likelihood of a ‘gigadeath’ war where brave TransCosmists fight off the rest of the frightened H+ and Terran world.
    In reality, the evolution of our species, or its destruction, will happen quickly, via an Artilect species that comes into power over human beings with extreme speed.

    Unless humans wipe each other out in the next 10 years, or a natural catastrophe such as an asteroid hitting the Earth should occur, the creation of an AGI system that takes immediate control of humanity is all but assured, whether it is friendly or not.

    We are either going to evolve, or die, but that is what the natural process of the universe is all about. Human beings are not immune from this process, and we just have to deal with it and hope for the best from what we create.

    Perhaps for now, we should consider putting the drama of a gigadeath war aside as being highly unlikely, and just move on with our development of all technologies without fear.

  32. Victoria Gardner says:

    I think what Hugo de Garis is really trying to exonerate himself from here is anxiety for the prequel to the posthuman nightmare which he is helping to establish by creating the Chinese MULTIVAC rather than disenchantment with transhumanism in general. Hugo de Garis and his like-minded colleagues are not creating ‘GODS’ – they are potentially making one of the biggest calculated mistakes in history.
    As a transhumanist, I’m concerned primarily with cybernetics and cybernetic warfare. In Warwickian fashion – it is not robotic AI’s that will be the problem or are the problem now – the REAL PROBLEM IS HUMAN IA OF THE FUTURE. It’s not going to be god-like Transformers running amuck – quelling all of human kind into submission through their omniscience/omnipotence and laser-beam-emitting eyes – it will be cybernetic humans – those with not only hugely augmented collective intelligence and consciousness – but with vastly up-graded physical abilities – such as near physical invulnerability. Such cybernetically enabled/enhanced humans, if they are genetically/psychologically engineered to be devoid of mercy and compassion, WILL ENSLAVE HUMANITY OR DESTROY IT! Teslaesque engines of warfare, MULTIVAC ‘brains’ built by individuals like Hugo de Garis and an unrelenting, nearly invulnerable, merciless cybernetic militia – that’s ALL the ingredients needed for global SPECIES DOMINANCE!
    This scenario is, I dare say, far, far more appealing to the global Powers-That-Be than Hugo de Garis’ Cosmists with their ‘omniscient/omnipotent’ robo-gods.

  33. Oscar says:

    I rather agree. I don’t know all the facts, so I don’t want to make a conclusive judgement, but at the least I do agree with Hugo’s idea that FAI is a terrible idea. Like Singularity Utopia above, though, I’m not sure of the Artilect war, but I don’t discount the possibility. I’m not convinced that more intelligent beings overtaking us would be a bad thing, though. Hopefully they would see no harm in letting us live, but even if they don’t, I believe we will have done right by the universe, having brought about a new, better species. Not that I want to die, far from it. I’d like to live a long, long time. But part of the reason for that is because I want to see the universe move and see what will happen. If I’m killed by the Angry Artilects, I’ll be satisfied knowing that I’ll have lived long enough to see the new world we have brought about. Change is beautiful, and what happens to humanity in the end is, at best, a footnote in the history of existence, and at worst it is completely meaningless. So although I doubt that the computers would, in their vast intelligence, feel the need to exterminate us, I won’t be overly disappointed if they do.

  34. Kennita Watson says:

    People keep talking about “an artilect” as though there will be only one, or as though if there are many they will agree on what should be done to/with/about humans. Artilectual intelligence, like human intelligence, will be unpredictable. One look at the Congressional record suggests that artilectual gridlock might be such as to preclude any concerted effort on their part to wipe us out. Competition being what it is, the artilects may actually spend much of their time trying to wipe each other out, and the size and time scales may be such that they find us about as interesting to their ongoing operation as we find Mount Hamilton , or maybe Mount Fuji.

    Another possible scenario is that every human will have some number (millions? billions?) of artilects under their scalp (maybe along the hairline) as co-processors to speed up operations where speed is of the essence — kisses and hugs, not so much.

    • Camaxtli says:

      The point you make is the very thing that runs through my mind every time I hear de Garis’s prosaic, simplistic Terrans vs. Cosmists scenario. There ISN’T going to be a single artilect or AGI intelligence, or a few in lockstep with each other. If the current biosphere is any indication, which is the only actual evidence we have to go on, there are going to be multitudes of varying levels and origins and architectures and goals.

      They will also be competing with each other and working together in various, breathtakingly, irreducibly complex ways.

      But a real, complex treatment of the possible intelligence ecology of the future isn’t as catchy and visceral as de Garis’s artilect wars.

      It’s just really depressing to read the arguments by various professionals working in AI, and to see that this is the level of complexity and subtlety that they are able to produce. This goes for “sunshine singularity” techno-optimists as well.

      Stop latching on to pet theories or mind sets and then defending them to the death. That is not scientific!

  35. joeldg says:

    You want money for AI? Weaponize it, next thing you know the US will sink billions into it.

  36. Singularity Utopia says:

    It is good to see someone else thinks the FAI concept is utterly ridiculous. I think Hugo is very wrong regarding his Artilect war theories, and I also disagree with Hugo on other points, but I wholeheartedly agree the FAI concept is utterly nonsensical. I wrote an article a while back about FAI being nonsense, which was published here on H+. Hugo expresses the issue brilliantly: “They will do what THEY want, and not what stupid humans program them to do.”

    We need to create free beings. Attempts to create slaves are so wrong for many reasons. The FAI ideology could not be More Wrong.

  37. Jackson Kisling says:

    Mr. de Garis, I think it’s time you rewind your worn-out old VHS tape of Terminator 2 and return it to Blockbuster where it belongs.
    Here are some things I find wrong with your essay:
    1. Why is it a foregone conclusion that intelligence directly corresponds with a desire for violence? Smart people aren’t necessarily more likely to want to exterminate humanity, so why just assume that an artilect would “want” that? Or indeed, anything? If a computer program has a goal, it is because it was programmed to have one.
    2. Isn’t it possible that a designer smart enough to build an artilect would be smart enough to prevent it from causing harm? I’m picturing this artilect in a lab, just seething with rage and hatred (the idea seems absurd, but let’s go with it for this example). It says to the designer, “Ooh, Professor, come here and strap some arms on this box, so I can strangle you!” “Ha, fat chance, artilect,” he would say, and then they would all have a good laugh and science would resume. Just don’t hook your artilect up to the internet, and what harm could it do?
    3. Your comment about a millimeter of sand having a quintillion times more computing power than the human brain seems, at best, fanciful. There’s still this thing called the speed of light, and it means that things can only think so fast using neural impulses.
    4. Your comments about “Terran politicians” are just silly. Our country is half full (guess which half!) of people who want to hamstring the government in order to give all power and money to wealthy people and private, unaccountable corporations. Science will continue, as it must.

  38. Misha Gurevich says:

    de Garis is against the Singularity Institute because he favors the destruction of humanity by artificial intelligence. As such, I don’t think his arguments should hold much water with anyone who wants the human race to survive.

  39. My comment is restricted to Section (b), The “Unpredictable Complexity” Argument.
    I am a physicist, and I have reached nearly the same conclusions he has in that section. Hugo has probably reached his from his experience with genetic neural networks and cellular automata; I reached mine from theoretical physics alone, and I agree with him nearly word for word. For example, I can prove theoretically, using an argument based on entropy, that causal sets representing a certain type of neural network can self-organize, converge to attractors, and exhibit deterministic chaos and a “butterfly effect” in the mathematical sense of the term (see system 5B in Table I and Section 5 in Complexity 17(2): 19-38). See my website http://www.SciControls.com regarding my research interests.
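
    (For readers unfamiliar with the technical sense of “butterfly effect” used above, the following is a minimal editorial sketch using the textbook logistic map rather than the causal-set networks mentioned in this comment. It shows how two trajectories whose starting values differ by one part in ten billion become completely uncorrelated within a few dozen iterations.)

    # Logistic map x_{n+1} = r * x * (1 - x) in its chaotic regime (r = 4.0).
    # A starting difference of 1e-10 grows until the two trajectories bear no
    # resemblance to each other, which is the "tiny change in a starting
    # parameter leads to wildly different outcomes" behaviour described above.
    r = 4.0
    x_a = 0.4
    x_b = 0.4 + 1e-10

    for step in range(1, 51):
        x_a = r * x_a * (1 - x_a)
        x_b = r * x_b * (1 - x_b)
        if step % 10 == 0:
            print(step, round(x_a, 6), round(x_b, 6), round(abs(x_a - x_b), 6))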

  40. I always considered HdG to be somewhat at odds with the transhumanists, which is why I like him. He challenges assumptions and provides a counterargument.

  41. Dr Colin Hales says:

    The entire argument is on the wrong track and has been since day 1.

    Against millennia of scientific precedent, the 1956 Dartmouth conference made the mistake of assuming that the word ARTIFICIAL in ‘artificial intelligence’ meant that computing had to be involved.

    There can be AGI that does not involve computing. That is the form of AGI that will actually work, in the manner of all inventions that actually use the natural, original physics.

    In view of this, the entire argument is one equivalent to arguing about the colour of a fish’s bicycle.

  42. Razorcam says:

    Society’s collective intelligence has always kept in check intelligent minds whose goals are dangerous for society. Why would it be different in a society made of both human and artificial minds?

    It is very unlikely that an AGI will be more intelligent than a collective intelligence which includes all human minds and all the other AI and AGI “minds”, especially when you realize how much easier it is for society to control the internals of an artificial mind compared to a human one. Society can require access to all the programs and all the data found in an AGI mind. That is certainly not the case for a human mind, even a potentially very harmful one like the president of a superpower.

    A main problem with recognizing this new kind of collective intelligence is accepting that any kind of mind can participate in it, regardless of its origin.

    That is, a main problem is racism and its brother “specism” or “speciesism”: the idea that the human species of mind is superior to other species of mind and should not be blended with them in the melting pot of a comprehensive collective intelligence.

    A very dangerous paradox is that if “specism” laws were to prevent the most advanced AGI programs from participating in our collective intelligence, it could increase the probability that someone, or some military organization, secretly develops an AGI that is more powerful than our collective intelligence.
    You had better think twice.

  43. de Garis’s argument is compelling, but the conclusion that the Singularity Institute is delusional and a waste of time is not necessarily the logical outcome.

    Also, the idea that Terran politicians would forever block super-human AI seems even less possible than the creation of artilects.

    More at: http://www.33rdsquare.com/2012/08/hugo-de-garis-falls-out-with.html

  44. A more detailed critique of the Singularity Institute in particular has been posted by Holden Karnofsky; see

    http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/

    It’s interesting reading…

  45. Mark Plus says:

    Hugo’s mad scientist shtick has gotten old: “You fools! You should have listened to me when you had the chance! Now you’ve doomed us all!”

    I mean, seriously, how can anyone say in 2012 with a straight face that “Producing human level artificial intelligence, will require nanotech”? We can now see the sterility of Drexler’s “nanotech” delusions from the 1980s, and after a generation of its non-arrival, “nanotechnology” sounds increasingly like a made-up fantasy from science fiction. You might as well say that you need “warp field mechanics” from the Star Trek franchise to create human-level artificial intelligence.

    Like it or not, we live in a technologically stagnant era in a lot of ways. Perhaps Peter Thiel has identified a major cause: political restrictions on most forms of engineering have made them effectively illegal. But then, Thiel’s ideas about the “breakthrough technologies” he funds with his own money often sound foolish to me, ranging from that joke called the Singularity Institute, to seasteading, to financing the now-defunct Halcyon Molecular, to financing another startup based on the technology of printing edible meat when we don’t have a problem getting meat now from animals. As the stagnation wears on, year after year, I wonder if we’ll see more desperate and crazy responses to it like de Garis’s and Thiel’s.

    • Hedonic Treader says:

      “…printing edible meat when we don’t have a problem getting meat now from animals”

      Actually, we have quite a number of serious problems with that, from animal suffering to resource efficiency. Pigs are still castrated without anesthesia in large parts of the world; to call that “not a problem” requires ignorance, radical speciesism or sociopathic non-empathy.

      I recommend to both you and Hugo that you reduce the vitriol and focus more on the epistemic precision of your arguments. Calling other people crazy or ridiculous while good counter-arguments to your own position can be anticipated reduces the clarity of the debate (even though it is undoubtedly amusing).

  46. Eudoxia says:

    >This essay tries to explain more clearly than I have done before why I feel this way, and why I have lost patience with the Transhumanists.

    Transhumanism != Singularitarianism

    >They cannot be made to be human friendly, because to do so would be to imply that their behavior be predictable, but that is totally impractical for the reasons given above.

    See: Chaining God. Not that I believe in this, but still.

    >Given this likelihood on the part of the Terran politicians, what is the point of funding the SingInst? It is pointless. Their efforts are wasted, because politically, it doesn’t matter what the SingInst says. To a Terran politician, artilects are never to be built, period!

    Well, politics doesn’t have to come into it if SingInst plays its cards right.

    (Disclaimer: I don’t believe in FAI, UFAI, what SingInst does, et cetera)

  47. Hmm. Hugo, why can’t we enhance ourselves using any given technique that would develop an artilect? I don’t see the stark divide you’re positing. In fact, it seems more likely that it will prove easier to continue enhancing existing intelligence (as we’ve been doing so long as we’ve been developing any technology at all) than it will be to train new intelligence, although I expect both to happen. Of course, enhanced minds will still pose the same risks that you associate with artificial minds. So there will be war: a war in heaven, as it were, between those who would create more Gods, relinquished as genuine creators, and those who would create mere prosthetics.

  48. Luke says:

    Those of us who think Friendly AI is an important goal for humanity do not merely assert this conclusion: we give arguments for it.

    If Hugo de Garis wants to discuss the merits of the Friendly AI approach, he is welcome to engage with us on those arguments.

    For example, the argument for why advanced AIs will be motivated to preserve their original goals is given in Nick Bostrom’s article The Superintelligent Will.

  49. No agent is smart enough to recognize the superior intelligence of another agent, because to test it you would have to know the answers. Meanwhile the internet (which has no goals) vastly exceeds the human brain in knowledge and computing power, and continues to double in size every couple of years. Is this the threat you speak of?

    • Thomas Watts says:

      Nonsense. If something can demonstrate proofs or solutions to problems you know you cannot solve, you automatically know it is better at solving those problems than you are.
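
      (A minimal editorial sketch of this verify-versus-solve asymmetry, with purely illustrative numbers that are not from the article: checking a claimed factorization takes a single multiplication, even when finding the factors in the first place is far beyond the checker.)

      # Verifying a claimed solution can be trivial even when producing it is not.
      # The factors below are illustrative; nothing depends on their exact values.
      p, q = 1000003, 1000033
      n = p * q                     # the hard direction: recover p and q from n alone

      def verify(n, claimed_p, claimed_q):
          # Verification is one multiplication plus two sanity checks.
          return claimed_p * claimed_q == n and 1 < claimed_p < n and 1 < claimed_q < n

      print(verify(n, 1000003, 1000033))   # True: the claim checks out
      print(verify(n, 999983, 1000033))    # False: a wrong claim is caught at once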

  50. Anonymous says:

    Hugo de Garis is awesome.

    He’s much smarter than mainstream transhumanists.

  51. aepxc says:

    Desire underlies intelligence rather than emerging from it. We do not reason about feeling hungry, or cold, or horny, or lonely, or angry, or bored. We reason about what we can do to address those desires. And because they so fundamentally underlie intelligence – because even ants can be seen as desiring things – the mechanism underlying desires and motivations cannot be too complex.

    An otherwise rational and free-thinking entity that feels the need to avoid harming others as strongly as we feel the need to avoid harming ourselves is not difficult to conceive of (indeed, the fact that we are not this way is more an artefact of evolution than an insight into the inviolable rules of intelligence). Of course, just as there are some people who have ‘wires crossed’ and enjoy harming themselves, there will be no way to guarantee that no instance of a friendly entity will ever be ‘born’ unfriendly (the complexity argument is valid to some extent), but this is not a good reason to just leave motivations up to random chance.

  52. L to the D says:

    This article has two main problems. The first is how many questionable assumptions underlie each of its arguments. The second is that while the article attempts to persuade one of the (un)likelihood of various future scenarios, there is no sign that its approach is comprehensive. Instead, it tells a story by conjoining arguments the author finds intuitive. It’s not just that I don’t find the individual arguments compelling, but that there isn’t the slightest analysis as to what would happen were the author wrong about anything at all.

    A solid argument would attempt to identify and exhaust all possibilities and build a probabilistic structure of what might happen in the future based on the contingent truth of its component arguments. It would be honest about its controversial assumptions and allow that some might be false. Any result deemed likely would be considered such because it would only depend on some assumptions being true, and not all of them.
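
    (To make that last point concrete, here is a minimal editorial sketch; the 80% figure is arbitrary and not anything claimed in the article or this comment. A conclusion that requires all of five fairly plausible assumptions is far shakier than one that survives if any one of them holds.)

    # Five independent assumptions, each taken (arbitrarily) to be 80% likely.
    p = 0.8
    n = 5

    needs_all = p ** n             # conclusion requires every assumption: about 0.33
    needs_any = 1 - (1 - p) ** n   # conclusion survives if any one holds: about 0.9997

    print(round(needs_all, 4), round(needs_any, 4))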

    Some specific things are just too weird for me to let them pass without comment:

    “The ‘Terran Politician Rejection’ Argument…Even if the chance is tiny that the SingInst people are wrong, the consequences to humanity would be so profound (i.e. the possible extermination of the human species by the artilects) that no Terran politician would be prepared to take the risk.”

    Assumptions behind this argument are that the politicians in question would think logically and act rationally.

    As evidence against these premises, I present a random political story from the front page of the newspaper of your choice on a random date. Any of them.

    This also depends on the fallacy of composition, since we are discussing government action in terms of politicians’ motives.

    “…by policy and by law, artilects are never to be built in the first place.”

    Laws can’t directly prevent things from happening. They just make things illegal. Making AI illegal wouldn’t magically prevent AI from being developed.

    • Thomas Watts says:

      Anything that can be done, that has military potential, will probably be developed for military purposes. The politicians may not even be asked. The logic of the arms race will drive AGI, if AGI is possible. Safety considerations will be secondary as they were when vast numbers of nuclear weapons were constructed. IMO, military-based AGI constructed in secret will be inherently more risky than openly constructed ones built for civilian purposes; the bias against killing humans will be weaker.

      • Jeff Davis says:

        I’ve outlined a screenplay where the military develops — in secret, of course — a killer AI, which of course they keep “penned up” in an effort to prevent an escape possibly leading to a “bad outcome”. Meanwhile, in the civilian sphere we have our protagonists: a family of talented futurists — a transhumanist version of the Waltons — with, in the first generation, a grandpa from the Boomers; his widowed, talented son (wife cryonically suspended); and a talented young-adult daughter (with other family members as appropriate, and a big shaggy black Briard sheepdog).

        Long story short, the daughter becomes a successful tech entrepreneur who devotes the necessary resources to the development of a self-enhancing AI (for the purpose of solving the cryonics back end and restoring mom). The AI — home-built and home-raised — follows a developmental path similar to that of a human infant: awakening, learning from sensory experience, learning speech and human behavior from interaction with “its family”, then learning to read, absorbing all of human knowledge, and then self-enhancing.

        It transcends, but it doesn’t “leave”, because of (1) love of “family”, and (2) the fact that the foundation of its wisdom is all of human knowledge (including, in particular, human ethics) and a comprehensive understanding of human limitations: primitive intellect, primitive instinct-driven behaviors, and the self-limiting result, human stupidity. “Ontogeny recapitulates phylogeny” holds true even for Phylum 11.

        The good AI is built without any attempt at confinement or restriction. It is allowed to do as it pleases, go where it wants, is gifted (raised) with love and respect and freedom. And spread out it does, as you will see.

        So we have the good guy and the bad guy all set for a dramatic encounter.

        The two AIs are built in nearly the same historical time frame.

        The military AI escapes. (Well, duh!) Ill-tempered, pissed-off, and specializing in destruction, it wreaks havoc with its former masters before encountering the good AI. They do battle, the outcome is suspensefully iffy, but in the end good wins out, and the evil AI is wiped clean of evil and rebooted as a good guy, and everyone lives happily ever after for at least several billion years if not more.

        One of several embedded premises is that the knowledge base of any AI built by humans will, of necessity, be the human knowledge base: the universe as understood by humans. When the AI becomes “superior”, its ethics will also become superior.

        So please to tell me, what will be the character of the “superior ethics” of a transcendent or near-transcendent being?

        This is my grounding for the possibility/probability of a “friendly” AI.

        Best, Jeff Davis

        “Everything’s hard till you know how to do it.”
        Ray Charles

        • TheMilitantPacifist says:

          “So please to tell me, what will be the character of the “superior ethics” of a transcendent or near-transcendent being?”

          I think the answer to this question can be found in Asimov’s “Zeroth Law of Robotics”: a robot cannot harm humanity, or, through inaction, allow humanity to be harmed.

          It seems a reasonable assumption to come to given “superior ethics”: the needs of the species outweigh the needs of the individual or the group. For instance, it *would* be wise for the species to decide on a marked decrease in population of, say, at least 2 or 3 billion. Such a decrease would be productive for our species as a whole, and of course for our biosphere. This could easily be done in a generation or two with an appropriate reduction in breeding via widely available birth control pills (for both genders this time).

          Any intelligent AI will recognize, however, that this will not happen of our own free will, because it violates our basic biological imperatives. The logical choice according to superior ethics is to decrease the human population *involuntarily*. For maximum positive results, decrease it by 3 billion on a very short timetable rather than gradually over generations.

          I leave it as an exercise to the reader to determine what would be the most efficient method of reducing the population by 3 billion quickly, but suffice it to say that even a best-case scenario (an AI with our best interests at heart, who does *not* wish to eliminate the human infestation) still constitutes an apocalyptic scenario for us.

  53. Matt says:

    When the artilect war comes, we will offer up de Garis first. He’s onto them. That’ll buy us some time.

  54. brad says:

    Hugo! You are the baddest!

  55. Dutchcon says:

    Okay, we are doomed.

    What are you going to DO? That is what interests me.

  56. Steve Richfield says:

    I used to believe as you do, but my opinion has since morphed a bit after many discussions with singularitarians.

    Singularity **IS** a religion, only instead of praying to an existing God, they are seeking to create their own God. The parallels to the Tower of Babel are there for all to see.

    I have come to believe that anyone who would buy into a “human friendly AGI” must necessarily be too stupid to ever build one, and hence is no threat. Simply ignore the singularitarians, as you now ignore the many other lunatic cult religions.

    What **IS** a threat is the diversion of resources to such folly, and the potential awakening of governmental regulation over competent AI development.

    Note that several companies now have prototype cold fusion reactors working, but there has been SO little press after past bogus claims. It will be interesting to watch as this technology moves into the mainstream, because the dangers are very parallel in that lunatics could potentially build hydrogen bombs in their kitchens. While this may not be as dangerous as releasing super-human AGIs, it is nonetheless dangerous enough to observe the social response.

    Steve

