H+ers and the Artilect : Opinion Poll Results

As I mentioned in the introduction to the first essay in this series of opinion poll essays, I have been complaining for years in the media that the level of optimism of people like Ray Kurzweil, concerning the rise of massively intelligent machines (artilects = artificial intellects) this century, is irresponsibly high. To counter this (in my view) excessive optimism, I came up with the idea of polling the general public, to benefit from the “wisdom of the crowds.” If the “pollyannists” could see that a substantial proportion of humanity thought that the negative scenarios should be taken seriously, then maybe they would tone down their optimism and become more realistic, more balanced, i.e. more pessimistic.

So, in the second half of 2011, I started taking opinion polls, by creating questionnaires. This essay reports on the results of the third such poll. The results of the first two were given in the first “polls essay”.

THE QUESTIONNAIRE

The questionnaire used in this third poll was identical to that used in the second. This questionnaire, in the actual format that was distributed to the people who filled it in, can be found here. On one side of the single page questionnaire were definitions of the three main philosophies concerning the species dominance debate. These definitions were needed so that people who were new to the debate could familiarize themselves with the main viewpoints. The other side contained the questions.

In December of 2011, I gave a talk to the Humanity Plus (H+) conference in Hong Kong, where I handed out the questionnaire. I handed out an identical questionnaire previously to a group of (largely) designers and architects (DAers) at the “Applied Brilliance” meeting in October of 2011 in Jackson Hole, Wyoming, USA.

I thought it would be interesting to compare the replies of these two groups (the H+ers and the DAers), since the H+ers are “committed” adherents of the philosophy that humanity should augment itself into super humans, i.e. “humanity plus” or H+. I was curious to see if such a selected group would differ greatly from a “non techie” group such as the DAers. Well, not unexpectedly, they did, as can be seen in the next section, though some of the results genuinely surprised me.

Specifically, the DAers consisted of 42 respondees: 10 labeled themselves “Cosmists,” i.e. they believed that humanity should build artilects (“artificial intellects,” massively intelligent machines); 7 labeled themselves “Terrans,” i.e. they believed that humanity should NOT build artilects; and 9 labeled themselves “Cyborgists,” i.e. they believed that people should modify themselves to become artilects. 16 were not sure.

The H+ers consisted of 36 respondees: 4 labeled themselves “Cosmists,” 0 labeled themselves “Terrans,” and 29 labeled themselves “Cyborgists.” 3 were not sure.

Summarizing:
DA Respondees : 42
Males 24, Females : 18
Theists : 20, Atheists : 22
Youngies (<50) : 23, Oldies (>50) : 19
Cosmists 10 (24%), Terrans 7 (17%), Cyborgists 9 (21%), Not sure 16 (38%)

Raw Data

H+ Respondees: 36
Males 26, Females : 10
Theists : 3, Atheists : 21, blank : 12
Cosmists 4 (11%), Terrans 0 (0%), Cyborgists 29 (81%), Not sure 3 (8%)

Raw Data

Note the dominance above (in red) of the Cyborgists amongst the H+ers. This is not surprising, since Cyborgism is the dominant idea of the H+ers. They want to add the “+” to their humanness (e.g. greater intelligence, greater memory, immortality, freedom from disease, etc).

Notes on the Questionnaire

The questionnaire consisted of 19 opinions that respondees were asked to score from 1 to 5: 5 meant strongly agree, 4 moderately agree, 3 not sure, 2 moderately disagree, 1 strongly disagree. In the percentages below, respondees who scored 5 or 4 are counted as having “agreed,” those who scored 3 as unsure (?), and those who scored 2 or 1 as having “disagreed.” For the raw scores, see here.

The results are formatted as follows, taking one of the opinions as an example :

Q10 There should be a law limiting the intelligence of computers and robots.
(H+: a 17%, ? 25%, d 56%)
(DA: a 02%, ? 21%, d 69%)
Comments: The DAers disagreed a bit more than the H+ers on this.

The opinion statement should be straightforward. (H+: a 17%, ? 25%, d 56%) means that 17% of the H+ers agreed, 25% weren’t sure, and 56% disagreed. The abbreviations are H+: for the Humanity+ conference attendees and DA: for the designers and architects of the Applied Brilliance meeting. Where the two groups differed by more than 15 percentage points, the percentages are given in red for emphasis. The percentages are followed by comments that summarize in words the main results for the opinion.
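This scoring scheme can be sketched in Python. The `summarize` function and the sample scores below are purely illustrative, not drawn from the actual raw data:

```python
from collections import Counter

def summarize(scores):
    """Collapse 1-5 Likert scores into agree/unsure/disagree percentages.

    5 or 4 -> agree (a), 3 -> unsure (?), 2 or 1 -> disagree (d).
    Percentages are rounded to the nearest whole point.
    """
    n = len(scores)
    counts = Counter(scores)
    return {
        "a": round(100 * (counts[5] + counts[4]) / n),
        "?": round(100 * counts[3] / n),
        "d": round(100 * (counts[2] + counts[1]) / n),
    }

# Hypothetical responses to one opinion statement (not the actual raw data):
sample = [5, 5, 4, 3, 3, 2, 1, 1, 1, 1]
print(summarize(sample))  # {'a': 30, '?': 20, 'd': 50}
```

Applying this per group to each of the 19 opinion statements yields the (a, ?, d) triples reported below.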

POLL RESULTS (at Humanity Plus Conference, Hong Kong, Dec 3-4, 2011)

Q1 Scientists should try to build computers that are smarter than people.
(H+: a 78%, ? 19%, d 0%)
(DA: a 57%, ? 21%, d 17%)
Comments: H+ers agreed a lot more than the DAers on this.

Q2 People should be allowed to implant computers into their bodies.
(H+: a 89%, ? 08%, d 00%)
(DA: a 55%, ? 33%, d 07%)
Comments: H+ers agreed a lot more than the DAers on this.

Q3 Highly intelligent computers will be risky to human survival.
(H+: a 56%, ? 25%, d 19%)
(DA: a 31%, ? 26%, d 38%)
Comments: H+ers agreed more than the DAers on this. It is scary that over half of the H+ers think this. See the general comments below.

Q4 It is against God and nature to build computers smarter than people.
(H+: a 00%, ? 11%, d 89%)
(DA: a 02%, ? 12%, d 81%)
Comments: Both H+ers and DAers disagreed strongly.

Q5 Building computers smarter than people should be against the law.
(H+: a 06%, ? 14%, d 78%)
(DA: a 00%, ? 17%, d 79%)
Comments: About 80% of both groups disagreed.

Q6 It is against natural law to build robots that are part human.
(H+: a 06%, ? 03%, d 86%)
(DA: a 19%, ? 24%, d 50%)
Comments: The H+ers disagreed a lot more than the DAers.

Q7 A war between robots and humans is likely to happen in the future.
(H+: a 11%, ? 39%, d 47%)
(DA: a 14%, ? 38%, d 43%)
Comments: About 40% of both groups weren’t sure about this. Scary.

Q8 If superhuman robots are built, they may not care about humanity.
(H+: a 50%, ? 28%, d 22%)
(DA: a 38%, ? 38%, d 17%)
Comments: About half of the H+ers agreed with this. Alarming for humans.

Q9 No one should be allowed to implant a computer in his or her body.
(H+: a 03%, ? 00%, d 97%)
(DA: a 07%, ? 19%, d 64%)
Comments: The H+ers overwhelmingly rejected this.

Q10 There should be a law limiting the intelligence of computers and robots.
(H+: a 17%, ? 25%, d 56%)
(DA: a 02%, ? 21%, d 69%)
Comments: The DAers disagreed a bit more than the H+ers on this.

Q11 I am frightened about the possibility of robots taking over the world.
(H+: a 22%, ? 25%, d 53%)
(DA: a 21%, ? 12%, d 60%)
Comments: A few more H+ers did not disagree with this.

Q12 It would be a great achievement to build robots smarter than humans.
(H+: a 92%, ? 06%, d 00%)
(DA: a 50%, ? 31%, d 12%)
Comments: The H+ers supported this far more than the DAers, by a whopping 42 percentage points.

Q13 There is a real danger that super-intelligent robots will wipe out humanity.
(H+: a 47%, ? 22%, d 31%)
(DA: a 07%, ? 31%, d 55%)
Comments: Nearly half of the H+ers agree with this. This is highly significant!

Q14 It is human destiny to build entities smarter than ourselves.
(H+: a 44%, ? 28%, d 19%)
(DA: a 48%, ? 14%, d 29%)
Comments: Almost half of both groups agree on this.

Q15 Scientists should leave the human genome as God and nature created it.
(H+: a 00%, ? 03%, d 94%)
(DA: a 17%, ? 29%, d 48%)
Comments: The H+ers utterly rejected this.

Q16 Genetic engineering should be used to cure diseases and improve crops.
(H+: a 89%, ? 06%, d 03%)
(DA: a 67%, ? 19%, d 07%)
Comments: The H+ers really wanted this.

Q17 Tiny robots should be built to enter the human blood stream and cure diseases.
(H+: a 94%, ? 06%, d 00%)
(DA: a 67%, ? 19%, d 07%)
Comments: H+ers really want this.

Q18 A species-dominance war (Terrans vs. Cosmists/Cyborgists) is coming.
(H+: a 08%, ? 44%, d 44%)
(DA: a 14%, ? 29%, d 50%)
Comments: Very few H+ers agreed, but more than 40% weren’t sure. But, what about H+ers replies to Q3 and Q13 ??! See general comments below.

Q19 Human beings and artilects can peacefully coexist.
(H+: a 64%, ? 33%, d 03%)
(DA: a 48%, ? 31%, d 12%)
Comments: Two thirds of the H+ers agreed, but a third were not sure.

GENERAL COMMENTS

What particularly struck me about the results of this questionnaire was the apparent contradiction in the H+ers’ answers between the “pro artilect” questions 1, 2, 5, 6, 9 on the one hand and the “existential risk” questions 3, 8, 13, 19 on the other. I got the impression that the H+ers prefer to build artilects even if it means that humanity’s welfare is threatened, or even if humans are exterminated by the artilects.

This strikes me as odd, since the basic philosophy of the H+ers is to “improve humanity” i.e. to improve the capabilities of humans. My impression is that H+ers are advocating to a lesser degree the “augmentation of humanity” than they are advocating the “swamping of humanity” by a vastly superior artilectual capacity, whether via pure artilects or advanced cyborgs. It looks as though the H+ers care more about becoming advanced cyborgs than they do about the fate of humans (and Terrans in particular.)

If this is so, then it seems likely, in my view, that the Terrans (anti-artilecters) will treat the H+ers (who are overwhelmingly Cyborgists) as much the enemy as they treat the Cosmists. From the perspective of a Terran, there is negligible difference between a pure artilect and an advanced cyborg. (Remember, a grain of sand 1 mm cubed that has been nanoteched, with one atom manipulating 1 bit of information and switching in femtoseconds, could outperform the switching capacity of the human brain by a factor of a quintillion (10^18), i.e. a million trillion times.) Integrating just one such grain of nanoteched sand into a human brain would convert that human into an artilect “in human disguise.”
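The quintillion figure can be sanity-checked with a rough back-of-envelope calculation. The one-atom-one-bit and femtosecond-switching assumptions come from the essay; the atom count and the brain figures below are my own order-of-magnitude estimates:

```python
# Back-of-envelope check of the "quintillion" factor.
# All figures are rough order-of-magnitude estimates.
atoms_per_mm3 = 5e19        # atoms in ~1 mm^3 of solid matter (e.g. silicon)
switch_rate = 1e15          # switches/second per atom (femtosecond switching)
sand_bit_ops = atoms_per_mm3 * switch_rate   # ~5e34 bit operations/second

neurons = 1e11              # neurons in a human brain
synapses_per_neuron = 1e4   # synapses per neuron
firing_rate = 100           # signals/second per synapse
brain_ops = neurons * synapses_per_neuron * firing_rate  # ~1e17 ops/second

print(f"ratio = {sand_bit_ops / brain_ops:.0e}")  # ratio = 5e+17
```

On these assumptions the ratio lands at roughly 5 × 10^17, i.e. on the order of the quintillion (10^18) claimed above.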

Another apparent contradiction I felt was between the H+ers’ answers to Q18 (on the likelihood of an “artilect war” between the Terrans and the Cosmists/Cyborgists) and their answers to the “existential risk” questions 3, 8, 13, 19. It seems to me that if the H+ers really feel that these existential risks to humanity are as strong as they say, then it would be logical for the Terrans to want to go to war to stop these risks from materializing. Hence the likelihood of an artilect war ought to be judged higher by the H+ers. My feeling is that the H+ers have not fully digested the political implications of their answers to the existential risk questions. Perhaps this will change as the implications sink in, and as the whole species dominance issue is discussed more in the media over the next few years.

As I mentioned in the first poll results essay, there is a lot of detail in these answers that merits deeper study.

A NEW BRANCH OF SOCIOLOGY : “ARTILECT SOCIOLOGY”

I repeat here the appeal I made in the first “poll results” essay, since I feel it is so important.

Given that the rise of the artilect will probably prove to be this century’s dominant global political issue, it makes sense for sociologists and psychologists to take an interest in this huge issue and apply their respective skills to its elucidation.

I’m hoping that these two “species dominance” opinion poll essays will inspire ambitious young graduate students or young tenure track professors in these two fields to undertake more comprehensive and more scientific studies on the species dominance issue. Once enough studies of this type are undertaken, we will be able to talk about the establishment of a new branch of sociology or psychology, namely “artilect sociology” or “artilect psychology”. Once it is established, professors can write textbooks and teach courses at undergraduate and graduate levels on the topic.

Once the “wisdom of the crowds” is used in the “species dominance debate” (i.e. “Should humanity build artilects this century?”) then a more realistic, more balanced scenario of what is likely to happen can be created, instead of the naively and irresponsibly optimistic scenarios of the “pollyannists.”

15 Comments

  1. I find it rather comical that there is a belief that a war on robots will occur, because the reality is the war is already happening. A captcha is literally a way of stopping robots from attacking or stealing data.

    On a more abstract level, every programmer’s job is to increase efficiency and kill off jobs, whether they are conscious of what they are doing or not. People are already fighting for jobs against code. I think it is very naive to think that “a war on robots” is a physical war where there are physical machines walking around; this is a very primitive view of what the war on robots will be, and we can know this because of what is happening right now.

  2. “Artilect” as a word is an awful choice; it sounds too much like “artillery.” Even if that were not the case, it would be an uninspired word. Also, AI is shorter. Let’s not coin new words where they’re not needed.

  3. @AgreeToDisagree

    I believe that rather than “Every single opposing group could[should] do their own thing”, every individual should do their own thing. Zoning and segregating may not be something that an individual is willing to do, and would be a terrible thing to enforce. Perhaps you meant that we could allow them to segregate, under a mutual agreement, and I can understand that, but I don’t think there will be that sort of consensus.

    And as for no groups being sacrificed or dominating, I think that many of us (including myself) will agree with you, but that it will not be that simple. Though we can have laws making us equal, providing neither group a de jure superiority or dominance, I would like to emphasize that cognitive superiority is a de facto superiority that provides a predisposition for dominance, which is also the state of the modern world.

    Although, I don’t think it’s clear whether human augmented intelligence or solely artificial strong intelligence will come first. From an uneducated point of view it seems that:
    1. The advancement of AI is not clearly parallel to the advancement of computing.
    2. The advancement of brain to computer interface is dependent on our knowledge of cybernetics and neurology.
    3. There’s actually a good chance that our knowledge of AI may be dependent on our knowledge of cognitive neuroscience.

    This may be a bit presumptuous of me, but without more information available it leads me to think that the advent of augmented humans could almost coincide with the advent of strong AI. Meaning that there will be a cultural bridge between Terrans and artilects that allows us to coexist more easily.

  4. Perhaps in the end the humans, cyborgs and artilects will end up “willingly segregating.” For example, for a cyborg or artilect, adapting to conditions on the Moon, Mars, other planets or free space is vastly simpler than engineering structures to allow unmodified humans to survive there. Just as tetrapods colonized the land whilst fish ruled the seas, the surface of the Earth may end up being a more-or-less “humans-only” zone whilst technological beings colonize space.

  5. Zone and segregate. With space commensurate to numbers. No groups should be sacrificed or dominate, and population controls on all groups should be applied where reasonable so that ‘war for space’ will not be necessary. Every single opposing group could do their own thing.

  6. I don’t quite understand what study can be done in the absence of subjects to study. Sure, one can write some papers arguing that cyborgs won’t be different from machines.. But I can’t even call that an educated guess, just as you aren’t convinced by Kurzweil’s writings. And maybe there is a technical answer to the threats of AGI. I have hope that we can achieve everything we want without fully independent, generally intelligent machines.

  7. Indeed Joseph. No need to resort to violence. The future artilect(s) may not even be conscious.

    We cannot help but use teleological language, even though it doesn’t apply. Even if they are conscious, it probably won’t be their intention to wipe us out, violently or otherwise. It’s just a fact that we are made of atoms, perhaps put to better use doing something else for them.

  8. We should be very careful that this view does not turn into a “self-fulfilling prophecy”. It is better to treat the causes of conflict than the symptoms. Dominance is not the issue; instead we should always strive towards co-existence and manage hostile individuals as we already do, because it is just a negatively-biased assumption that enhanced intellectual capability by itself leads to war. It always depends on the individual in question and its nature/condition/circumstances/motivations and so on. I notice more overgeneralization than balance. Strong pessimism must not create a shield against genuine positive developments.

  9. Under Q18 you inquire, “But, what about H+ers replies to Q3 and Q13 ??!” It sounds likely that what seems like a contradiction to you is a product of the same ambiguity that was complained about in your previous article, as to whether humanity being wiped out means direct physical imposition (literally being attacked), or simply being wiped out in a way comparable to how the modern calculator wiped out (kind of) the abacus. I think that this misunderstanding may be the sole cause of your belief that “that H+ers are advocating to a lesser degree the “augmentation of humanity” than they are advocating the “swamping of humanity”, and that your view that “It looks as though the H+ers care more about becoming advanced cyborgs than they do about the fate of humans (and Terrans in particular.)” comes from what seems to be a misunderstanding that “becoming advanced cyborgs” and “the fate of humans” are necessarily two different things.
    I agree that there should be an active rhetoric about the dystopian directions of our future concerning artilects, but I also think to anticipate war in its simplest definition is to oversimplify the possibility of conflicts. Even in the modern world, people who influence public policy, banking, and other things that regulate how the world works operate in a way that is too complex for many people to understand (perhaps because they just haven’t spent the time to understand it). I believe that artilects or any cognitively superior race, group, or individual would not resort to physical imposition, simply because it wouldn’t have to in order to achieve its goals.

    • The existential solution is simple. We need to design compassionate artilects.

      Moreover, compassionate symbiosis is more powerful than callous predation.

      Ten persons who do everything in their power to empower each other are, together, vastly more powerful than ten persons who rip each other apart.

  10. The average person doesn’t really believe in the singularity or AI.

    Even a person like me that supports the concept can sometimes doubt it will come soon or at all.

    An average person looks at a computer, sees how stupid it is or how dumb AI NPCs in games are, and laughs at the concept of sentient machines in their lifetimes.

    So if true AI happens in the next three decades, and if humans manage not to kill themselves before that goal is reached, then people will only become aware of this development when it is made or almost done.

    Things will be in motion by then. What people don’t recognize is that AI is THE most powerful force in the universe. An intelligence that can improve itself is by far the end of history, at least as far as dumb meat sacks consider it.

    So the decisions will not be made by crowds but by the people who recognized that power and invested in it. What they will do with such power is anyone’s guess.

  11. Thanks for doing this poll. The kinds of existential threats to humanity the future holds are largely not discussed within the current zeitgeist.

    However, I think that not specifying percentage likelihoods, combined with the ambiguity of words, makes the questionnaire hard to interpret. Furthermore, we are good at fooling ourselves, so it isn’t surprising there are large contradictions. We are talking about species extinction as a real possibility. Most of the H+ crowd knows the real threat is there, as there is no sky god to protect us.

    Perhaps consider a more exact method for your next survey. I would be fascinated to see a study of existential risk done in the context of god belief. Do theists believe god will save humanity from extinction? Most of the world believes this. Dark days are coming…

    • Well, if you read Revelation, God is not going to save humanity. He is perfectly content to let us destroy ourselves.

      As for the contradictions of H+ers, not really surprising. From most of the articles and comments I have read on this site, it seems obvious there is contempt for the current formulation of humanity.

      Finally, “did not disagree”? Really, a double negative?
      I really do not dislike good grammar.
