Stuart Hameroff’s Bogus Singularity

What is the cause of the Singularity?

According to Stuart Hameroff, a professor of anesthesiology and psychology and director of the Center for Consciousness Studies at the University of Arizona, the Singularity marks the moment when machines become conscious. In his lecture, “A New Marriage Of Brain And Computer,” Hameroff describes the Singularity in terms of a belief that Moore’s Law will continue long enough for a computer to match the computational capacity of the human brain (commonly estimated at around 10^16 computations per second), and that once this threshold is reached, human brain function, including consciousness, will occur in computers.
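To put a rough timescale on that belief, here is a minimal back-of-envelope sketch in Python. The starting capacity and the 18-month doubling period are illustrative assumptions, not figures taken from Hameroff or Kurzweil; the point is simply how quickly exponential doubling closes in on a fixed 10^16 cps target.

```python
import math

# Illustrative back-of-envelope only: how long until Moore's-Law-style doubling
# reaches the commonly cited ~10^16 computations-per-second figure for the brain.
# The starting capacity and the doubling period are assumptions for this sketch.

BRAIN_CPS = 1e16           # commonly cited estimate of brain capacity (cps)
start_cps = 1e13           # assumed capacity of a present-day high-end machine
doubling_time_years = 1.5  # assumed ~18-month doubling, taken to continue indefinitely

doublings_needed = math.log2(BRAIN_CPS / start_cps)
years_needed = doublings_needed * doubling_time_years

print(f"Doublings needed: {doublings_needed:.1f}")
print(f"Years to reach ~10^16 cps at this rate: {years_needed:.1f}")
```

At these made-up numbers the threshold is only about fifteen years of doubling away, which is why the claimed date of the Singularity is so sensitive to whether Moore’s Law actually continues.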

This raises a couple of questions. First, does anybody really believe that consciousness will just happen as a result of computers reaching some threshold of raw number-crunching? It is not too hard to find critics who think this is indeed Ray Kurzweil’s belief, but a careful reading of his arguments shows this is not really his assumption at all. In his book, The Age of Spiritual Machines, Kurzweil repeatedly states: “Adequate computing power is a necessary but not sufficient condition to achieve human levels of intelligence.” Rather than believing computers will automatically match human levels of intelligence once Moore’s Law has brought our machines to 10^16 cps and beyond, Kurzweil argues that increasing hardware capacity, plus an increasing ability to reverse-engineer the structure and functions of the brain, will enable us to create conscious computers.

Hameroff believes current dogma in the brain sciences seriously underestimates the complexity of the neuron. Instead of thinking of neurons, synapses, firings and networks as being analogous to electronic switches, states and circuits in classical computers, he champions the idea that microtubules within the neuron are performing quantum computations, and that consciousness is itself a kind of quantum phenomenon. If this is true, no classical computer can ever be conscious, and that means the Singularity is bogus.

Of course, that conclusion only follows if “Singularity = machine consciousness” is the right equation. So perhaps a more important question than “can a machine be conscious?” is “does the concept of the Singularity really depend upon machine consciousness?” There is good reason to suppose it does not. Referring back to Vernor Vinge’s paper, we find that he outlined several ongoing developments that could result in a profound amplification of human intelligence:

The IA Scenario: We enhance human intelligence through human-to-computer interfaces — that is, we achieve intelligence amplification (IA).
The Biomedical Scenario: We directly increase our intelligence by improving the neurological operation of our brains.
The Internet Scenario: Humanity, its networks, computers, and databases become sufficiently effective to be considered a superhuman being.
The Digital Gaia Scenario: The network of embedded microprocessors becomes sufficiently effective to be considered a superhuman being.

Most of these scenarios do not necessarily assume that computers themselves will become conscious. Instead, they assume that we will harness the capabilities of human intellect together with those of computer and communications networks in order to build increasingly complex bodies of knowledge.

Take the ‘Internet Scenario’ as an example. HTML documents are designed to be understood by people rather than machines. This is unlike the spreadsheets, word processors and other applications stored on a computer, which have underlying machine-readable data. That machine-readable layer enables a division of labor: computers handle storage and retrieval, correlation and calculation, and data presentation, while humans supply the goal-setting, pattern recognition and decision-making. There are ongoing efforts to build that additional layer into the Internet — data that can be understood by machines — resulting in something that is being called the “semantic web.” Once there is a common language that enables machines to represent and share data, once billions of concepts, terms, phrases, and the like become linked together, we will be in a better position to obtain meaningful and relevant results, and to facilitate automated information-gathering and research. A semantic web does not necessarily need computers that can talk to people (thereby passing the famous Turing test for artificial intelligence), because it is much more about computers communicating effectively with other computers on behalf of people.
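To make the “machine-readable layer” idea concrete, here is a toy sketch of how semantic-web data is usually modelled: facts stored as subject–predicate–object triples that a program can pattern-match directly, with no natural-language understanding involved. The tiny in-memory store and its vocabulary are invented for illustration; real systems use standards such as RDF, with SPARQL as the query language.

```python
# A toy triple store: facts as (subject, predicate, object) tuples, the basic
# shape of semantic-web (RDF) data. The vocabulary here is an invented example.

triples = {
    ("Stuart_Hameroff", "affiliation", "University_of_Arizona"),
    ("Stuart_Hameroff", "field", "anesthesiology"),
    ("Orch_OR", "proposed_by", "Stuart_Hameroff"),
    ("Orch_OR", "proposed_by", "Roger_Penrose"),
}

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the given pattern (None acts as a wildcard)."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# "Who proposed Orch OR?" -- answered by pattern matching, not by parsing prose.
print(query(subject="Orch_OR", predicate="proposed_by"))
```

Notice that answering “who proposed Orch OR?” involves no conversation and no Turing test, just machines exchanging and matching structured facts on behalf of people.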

As we enter an age in which petabytes of data are routinely processed, we might not find consciousness spontaneously arising in the Cloud, but changes are certainly afoot. In a Wired magazine article (“The End of Theory”), Chris Anderson wrote about how the sheer quantity of data computer networks can handle is fundamentally changing the way we conduct research. The scientific method was born out of the golden rule that no conclusions should be drawn simply on the basis of a correlation between datasets; what was always required was a model that explained the underlying mechanics connecting X and Y. In Anderson’s words, “once you have a model, you can connect the data sets with confidence. Data without a model is just noise.”

But in the face of truly gargantuan amounts of data (such as the 350 terabytes of data the Large Hadron Collider will produce each week, or the 1 petabyte of data processed by Google’s servers every 72 minutes) the hypothesize-model-test approach to science is beginning to break down. The solution? Throw away the model. As Anderson explained, “We can analyze the data without hypotheses about what it might show…let statistical algorithms find patterns where science cannot…The new availability of huge amounts of data, along with the statistical tools to crunch these numbers, offers a whole new way of understanding the world.”
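Here is a minimal numerical sketch of what “let statistical algorithms find patterns” can look like in practice, assuming NumPy is available: scan every pair of variables in a dataset for strong correlations without stating any hypothesis first. The data are synthetic random numbers with one planted relationship, purely for illustration.

```python
import numpy as np

# Illustrative only: model-free pattern hunting. Scan all variable pairs for
# strong correlations without any prior hypothesis. Data are synthetic, with
# one relationship deliberately planted between columns 3 and 7.

rng = np.random.default_rng(0)
n_samples, n_vars = 10_000, 50
data = rng.normal(size=(n_samples, n_vars))
data[:, 7] = 0.9 * data[:, 3] + 0.1 * rng.normal(size=n_samples)

corr = np.corrcoef(data, rowvar=False)   # 50 x 50 correlation matrix
np.fill_diagonal(corr, 0.0)              # ignore trivial self-correlation
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"Strongest pattern found: variables {i} and {j}, r = {corr[i, j]:.2f}")
```

At petabyte scale the scan runs on distributed clusters rather than a single in-memory array, but the logic Anderson describes is the same: find the patterns first and worry about explanatory models later, if at all.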

It seems likely that, 15 years from now, we will have witnessed a huge increase in the number of network-connected devices. Vernor Vinge saw this leading to “a world come alive with trillions of devices that … communicate with their near neighbors and thus, with anything in the world.” This may well lead to more complex webs of convergent knowledge (in which complementary solutions and problems exist between groups who may be unaware of any possibility for collaboration). As links are forged between seemingly unrelated specialities, and as we outsource aspects of cognition that are better handled by our computer networks (why memorise when you’ve got Google?), this new way of understanding the world might amount to a change comparable to the rise of human life on Earth. And doesn’t that sound like the Singularity?

See Also

Singularity 101 w. Vernor Vinge [http://hplusmagazine.com/articles/ai/singularity-101-vernor-vinge]

Ray Kurzweil: the H+ Interview

Ray Kurzweil & the Conquest of Mt. Improbable


2 Comments

  1. I mean, he’s talking about the very real possibility of God, of scientifically proving God. And you’re being so dense, you didn’t even take the time to understand where he is coming from.
    What he is saying is that we and all life are plugged into a protoconsciousness that is the basis for all consciousness. In layman’s terms…that means God.

  2. What the heck are you writing about? Hameroff says no such thing. You’ve completely missed his stance. He doesn’t say that 10^16 causes consciousness. He says that that is what the AI crowd thinks, and that they are wrong. For one, he believes that we operate further down the road than the AI crowd is saying. Somewhere in the neighborhood of 10^27.
    I mean, you’ve either missed what he was saying or you are deliberately trying to undermine his theory for some other reason known only to you.
