Artificial Humanity

Science is not magic, no matter what the movies might tell us. It operates under very real, and very palpable constraints.

One of these is money. You can’t just recite equations like they’re incantations, and pull change out of the ether. It takes time, and time costs money. Time to develop the discipline, and the integrity, time to put that discipline to use.

It is a radical shift in how to handle knowledge, and this radical nature is especially clear when you place it beside the ways in which people have historically handled knowledge.

Myths and legends, fairy tales and fables. Pretty stories that tie everything up in a neat little bow. The desire to build such stories and sustain them is a deeply human one, but one that any scientist who seeks to advance the frontier of human understanding must overcome.

And perhaps the most powerful constraint on all science is closely linked to humanity’s desire for simple little stories.

It is the assumptions that are inherited. The unknown unknowns. The assumptions that form the bedrock of our engagement with the world. As human beings, scientists are not immune.

One fascinating example of this is a set of seemingly very safe assumptions that underlie the scientific pursuit of artificial intelligence.

Computer processing speed has, in the course of a single human lifetime, increased to an incredible degree. Gordon E. Moore observed in 1965 that computing power was doubling at a steady rate; the law that now bears his name puts it at roughly every two years, and David House of Intel observed that, in practice, it was closer to every 18 months.

And compared to the linear processing capacity of human beings, computing started out ahead. ENIAC, the world’s first general-purpose computer, was unveiled in 1946. The press heralded it as the “Giant Brain.”

But how did it stack up? How was it, in terms of processing speed, compared to a normal human brain?

Well, put it like this. ENIAC, in 1946, could do a 10- by 10-digit multiplication, flawlessly, at a rate of 357 of them per second.

And not only could it do that, it could do that all day, and would never get tired, and would never get confused, and would never, never, get a calculation wrong.

You can keep your “Rain Man”; nobody can do that.

It is a common staple of our speculation about the future of computing that one day computers will outclass human beings in terms of intelligence. But if, by intelligence, you mean linear processing capacity, that day has long since passed.

Today, the world’s fastest supercomputer is called Titan, and it hums away, doing its thing, at Oak Ridge National Laboratory in Tennessee. It took the title of world’s fastest computer against very stiff competition in 2012, with a benchmarked performance of 17.59 petaFLOPS.

One petaFLOPS is 10 to the power of 15 floating-point operations per second. That’s a 1 with 15 zeros after it. That many calculations, every single second. And that’s just one petaFLOPS. Titan clocked 17.59.

Here’s my point. If, in 1946, ENIAC was doing better than any living human in terms of linear processing, and if Moore’s Law has broadly speaking played out, and it has, what is Titan compared to the linear processing power of a human?

Something well out the other side of massively better.
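To put rough numbers on that, here is a back-of-envelope sketch using only the figures quoted above. It is illustrative, not rigorous: it treats a 10-digit multiplication as loosely comparable to a floating-point operation, and it assumes a clean 18-month doubling, which real history only approximates.

```python
# Back-of-envelope comparison using the round figures quoted in this article.
# Illustrative only: a 10-digit multiplication is not one FLOP, and Moore's
# Law never ran with clockwork regularity.

eniac_ops_per_sec = 357       # 10- by 10-digit multiplications per second, 1946
titan_flops = 17.59e15        # 17.59 petaFLOPS, Oak Ridge, 2012

# One petaFLOPS is 10**15 floating-point operations per second:
# a 1 followed by 15 zeros.
print(f"Titan / ENIAC ratio: {titan_flops / eniac_ops_per_sec:.2e}")
# roughly 4.9e13, i.e. tens of trillions of times faster

# How many doublings would an 18-month doubling period predict over the
# 66 years between 1946 and 2012?
doublings = (2012 - 1946) / 1.5
print(f"Doublings: {doublings:.0f}")                # 44
print(f"Predicted speed-up: {2 ** doublings:.2e}")  # about 1.8e13
```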

If we are looking to build artificial intelligence that is human-like, is this the way to go? Adding linear capacity (at an exponential rate, of course) to processing power?

Do we need more processing power to create a human level of intelligence, when the speed and accuracy of calculation surpassed human capability 67 years ago?

There is an assumption embedded in our understanding of human intelligence, and it extends way back to the days of Newton. The Enlightenment that began in Europe had a central belief right at its core: that reason, rationality, linear processing was the fundamental character of human intelligence.

And yes, we do other things, but that’s where the action is. That’s what makes us good at working things out. That’s what reason is, when you really get down to it. Make something that can do that and, whether or not it can feel emotion, it can think, because that’s what thinking is.

Or is it?

Because what we know now is that you can have a computer that can process so fast that it could probably outmatch the combined linear processing power of every human alive. But does it really think? Is it intelligent? Have we succeeded in our alchemy, and created intelligence from dead metal?

I would argue no. But then a very strange question arises. Why not?

Is it that we just need more power? More processing speed? If a linear processor can speed up even faster, if Moore’s Law keeps going for another decade, another century, surely at some point a machine will become self-aware, and we will reach that singularity Ray Kurzweil spoke of?

Except, what if linear processing is not the fundamental characteristic of human intelligence?

What if it’s something utterly different? Not just a different kind of processing, but something utterly different in nature from anything we have so far considered?

And if it is, then perhaps a self-aware machine is closer than we think.

The neuroscientist Iain McGilchrist recently published a book called The Master and His Emissary. It won many plaudits, and it is a strikingly new account of the workings of the human brain. It looks at the different ‘agendas’, so to speak, of the two brain hemispheres, the two ‘sides’ of the brain.

To sum up an incredibly sophisticated argument in just a few sentences (forgive me, Dr McGilchrist, if I mangle it somewhat), the right hemisphere is concerned with charting reality using pattern recognition. It is not a linear processor, but far closer to the distributed systems used in the creation of neural nets.

This is, of course, to be expected, as neural nets were devised to mimic brain architecture.

But it doesn’t really do linear processing. What it does is massively distributed processing that is primarily concerned with charting patterns in reality, and representing them with as much fidelity as it can.

But then, of course, you have the left hemisphere, the ‘rational’ hemisphere that does the ‘categorisation’ stuff. Logic and structure are the domain of the left hemisphere.
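For a concrete, if toy, illustration of the two styles being contrasted here (a sketch only, not a claim about how either hemisphere actually computes; the little net’s weights are arbitrary stand-ins for anything learned):

```python
import numpy as np

# Linear, step-by-step processing: one operation after another,
# each depending on the result of the last.
def linear_sum_of_squares(xs):
    total = 0
    for x in xs:
        total += x * x
    return total

# Distributed processing: a tiny fixed-weight feedforward net that maps a
# whole input pattern to an output in a single pass, every unit contributing
# at once.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input -> hidden
W2 = rng.normal(size=(8, 1))   # hidden -> output

def tiny_net(pattern):
    hidden = np.tanh(pattern @ W1)   # all hidden units respond together
    return hidden @ W2               # the output blends all their responses

print(linear_sum_of_squares([1, 2, 3, 4]))       # 30
print(tiny_net(np.array([1.0, 2.0, 3.0, 4.0])))  # an arbitrary pattern response
```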

OK, so perhaps if we were to couple a linear processor (left hemisphere) with a neural net (right hemisphere), we might be getting somewhere. And we might.

But for the fact that the left hemisphere isn’t honest.

It doesn’t work to create hi-fidelity output like the right hemisphere does. Instead, it does something very unexpected.

It seems to be working after the fact. That is to say, the rational processing that seems to precede the decisions we make actually takes place after those decisions are made.

Very shortly after (a matter of microseconds), but after, and not before.

The left hemisphere doesn’t seem concerned with actually processing reality, or engaging with the real. What it seems primarily concerned with is rationalising the patterns recognised by the right hemisphere in terms of linear processes that are unconnected to the actual processes from which the ideas came.

Which is unutterably strange.

Why would it do this? Why bother? Why go to this effort? What’s the payoff?

Nothing that has passed through a process of evolution does something this radical for no reason. It’s not a mistake. It’s not malfunctioning. This is what it has evolved to do. Why?

There is one reason that occurs to me, and it is this.

What if the human brain is not in fact primarily concerned with processing reality, but is instead concerned with creating the illusion of so doing?

A jarring, jarring thing to think, especially if you start to consider what your thinking actually is in the light of this possibility.

Why would such an illusion evolve? What is its purpose?

Well, think about this. No matter how insecure we feel when our intelligence is compared to Titan, or even ENIAC, human intelligence has evolved well beyond any level required by the evolutionary pressure to survive.

The ability to work out how to make a flint knife, or to identify useful herbs, could be understood as a response to survival pressure. The ability to fly to the moon, less so.

But there is another element to evolution which might indeed make more sense of the minds we have, the brains we have, and the specific nature of them: the evolution of a courtship display.

The evolutionary psychologist Geoffrey Miller wrote a book called The Mating Mind along just these lines. His contention was that the massively amplified power of the human mind evolved in order to create greater and more elaborate courtship displays.

But what if it’s simpler than that? What if the mind itself actually is a courtship display?

A rational structure projected by the left hemisphere to be filled in by the right hemisphere with quality and emotion, in order to create the most compelling mating display in all of evolution?

The human self.

Ridiculous, of course. We’re all rational, aren’t we?

Aren’t we?

This is a very radical reorienting of what human intelligence fundamentally is, and leads to a conclusion so strange that it seems genuinely beyond reason.

What if the human mind is itself artificial intelligence?

If the rationality of the mind is a fiction, a fiction that is the bones and structure of the illusion, fleshed out with moral colour, emotional depth, and quality?

What then for AI?

It might be closer than we think. If this is true, we’ve been looking at it all wrong. Instead of strapping a linear processor to a neural net, why not do something different?

Strap two neural nets together. One exists to chart reality as best it can, with a number of sensors feeding raw data into it. The other, the master, exists not to recognise patterns, but to project them. To project a very specific kind of pattern: the illusion of rationality.

To project it, and also to use the contours mapped by the hi-fidelity neural net to fill that illusion in, to flesh it out. And the master net would choose to use or discard these patterns along a very specific set of parameters: not the most accurate, but the most effective at bolstering the illusion of a self that is coherent, rational and aware.
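Here is one way such a pairing might be sketched in code. It is a toy under loose assumptions, not a design: the ChartingNet and MasterNet names, the sign-agreement ‘coherence’ rule, and every parameter are hypothetical stand-ins for the idea that one net charts with fidelity while the master net keeps whatever fits its running self-narrative and discards the rest.

```python
import numpy as np

rng = np.random.default_rng(42)

class ChartingNet:
    """Toy 'charting' net: maps raw sensor data to candidate patterns,
    aiming only for fidelity to the input. Random weights stand in for
    anything a real net would learn."""
    def __init__(self, n_sensors=16, n_patterns=8):
        self.W = rng.normal(size=(n_sensors, n_patterns))

    def chart(self, sensor_data):
        # One distributed pass: every candidate pattern is scored at once.
        return np.tanh(sensor_data @ self.W)

class MasterNet:
    """Toy 'master' net: keeps a running self-narrative and accepts or
    discards candidate patterns by how well they fit that narrative,
    not by how accurate they are."""
    def __init__(self, n_patterns=8, blend=0.1):
        self.narrative = np.zeros(n_patterns)   # the projected 'self' so far
        self.blend = blend

    def integrate(self, candidates):
        # 'Coherence' here is crude: a candidate is kept only if it agrees
        # in sign with the current narrative; clashing patterns are dropped.
        agrees = np.sign(candidates) == np.sign(self.narrative + 1e-9)
        accepted = np.where(agrees, candidates, 0.0)
        self.narrative = (1 - self.blend) * self.narrative + self.blend * accepted
        return self.narrative

charting = ChartingNet()
master = MasterNet()

for step in range(5):
    sensors = rng.normal(size=16)              # raw data from the world
    candidates = charting.chart(sensors)       # patterns charted with fidelity
    self_state = master.integrate(candidates)  # narrative, selectively updated

print("Projected 'self' after 5 steps:", np.round(self_state, 2))
```

None of the numbers or rules here are meant seriously; they only make the wiring of the two nets concrete.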

Strap that together, and you might have something massively more human than anything of which Oak Ridge Laboratory can boast.

Forget artificial intelligence. Try artificial humanity.

This involves an extremely radical reorienting of our understanding of humanity itself. But that doesn’t mean the jarring account isn’t true. Reality does not care to conform to our expectations of it. It does what it wants to do, and the only choice we have is to be open to that, or closed to that.

If we are open to this new way of looking at human intelligence (and it is a very strange new way, granted) then the building of an AI architecture along these lines could well demonstrate whether this is, or is not, what is going on with people.

It is a profoundly testable theory, no matter how jarring or strange it is. And we have the technology, right now, to create a system that would test it.

To create a self-aware computer by actually giving it a self of which it can become aware.

Maybe it won’t work, maybe it will. But one thing is certain.

I’d be loath to plug it into the defence systems of the United States of America.

We know how that would end.

We’ve all seen Terminator.

 

###
Ciaran Healy is an independent philosopher who uses the scientific method to chart the contour of human suffering and pain.  He works to discover new ways to undercut these things at source.  His aim is to bring these hidden dynamics to light with clarity and force for the general reader, and anyone up for looking at things in a new way.  He has been working at this for about 17 years, and amazingly, still loves it.  He lives in Edinburgh with his wife, and as he is unable to keep goldfish alive for long, it’s just them for now.
You can check out his work at www.ruthlesstruth.com

 

5 Comments

  1. I posted this reply to the Reddit thread for this article, but I thought I’d also put it up here:

    While we might be able to develop emotion-like heuristics, scientists still have no clue what causes the “feeling” of emotions, or of other sensations (perhaps including the sensation of consciousness). This is the problem of “qualia”, although that word means different things to different philosophers so it’s a bit troublesome. At any rate, we could give robots some heuristic measure of “the best thing to do” which could be the goal signal for a robot, much as happiness (our instincts’ judgement of “the best thing to do”) is our goal signal, but scientists have literally no idea how to create a feeling of happiness in an artificial being. We don’t even know how we do it in ourselves! Similarly a robot may be able to detect harm to its body and send signals to its processor telling it to mitigate the harm through some action, but this is pretty different (from our point of view) from feeling pain. So until this problem is solved, I don’t see why anyone should feel bad for causing an increase in a robot’s “sadness” heuristic, because the involuntary, painful part of the emotion “sadness” would be completely missing.

    • (Moderator, half of my comment is missing for some reason. Here’s what I tried to put)

      I think this article is missing out on a couple of key issues:

      Civilization has vastly changed our environment in a very short period (evolutionarily speaking) and our emotions have not had time to adapt. The choices our emotions push us toward are often the wrong ones in the modern age, hence people “bottling up” their anger or being depressed when we’re bored (why must we always have something to occupy us?). This is also true of other heuristics such as the overwhelming desire many feel to eat sweet and fatty things. But our survival instincts are almost hardwired into us, and very resistant to change, as well they should be. If we find a way to ditch (or heavily modify) our emotions, and then apocalypse strikes and we return to an environment where survival is difficult, our missing or modified emotions might get us killed.

      Robots, on the other hand, would have no reason to be truly worried about survival. For one thing, they can be fixed more surely than humans, or even put into a new body. (And if Apocalypse strikes they’ll be screwed regardless from the lack of working power outlets :). Because of this, whatever “emotional” heuristics they have could be much more adaptable to both modern life, and whatever tasks we require of them. Not only could they potentially learn appropriate heuristics over longer periods than a human lifespan (since they may never die, and new robots could come with their predecessors’ full knowledge built in) but they could conceivably share experience with each other directly, allowing them to have knowledge of every aspect of robotic life for their heuristic training. So I think worries of them being “unreasonable” because of their “emotions” are misplaced. Essentially, robots could be better at emotions than we are.

  2. not such a new idea – viz.:
    http://en.wikipedia.org/wiki/Bicameralism_%28psychology%29
    http://en.wikipedia.org/wiki/Dual_brain_theory
    Julian Jaynes, et al.

    however, would certainly be interesting to see some more experiments with hardware/simulations. let me know if you dig anything up. (you didn’t mention whether you had actually researched that).
