Artificial Intelligence: Hawking’s Flaw

A couple of months ago, Stephen Hawking published an article warning that the rising sophistication of artificial intelligence might mean machines could soon enslave us all. When a scientist of Hawking’s stature says something, people take it seriously. It’s bad enough when so-called “experts” claim that within twenty years, robots will overtake humans as Earth’s dominant creatures. Then came the recent report of a computer passing the Turing Test, which is enough to make people start fearing their toasters. And now that the cinematic origin of Ultron has been revealed, the issue needs to be addressed. Otherwise, those who panic over “smart machines” will point to Hawking as justification.

Okay, so maybe it’s a little presumptuous to assume it could never happen. Still, I would give Honduras a better chance of sweeping the World Cup three times straight than I would give machines of ever achieving superiority over humans. One doesn’t need deep knowledge of A.I. or neurology to see why.

But that’s the problem: most people don’t have even the slightest knowledge of such subjects, so they turn to whomever they perceive as “experts”. Unfortunately, that often means turning to any available “scientist”. The problem is that a chemist isn’t an expert in open-heart surgery. A geologist isn’t an expert in cognitive neuroscience. A geneticist isn’t an expert in rocket propulsion. And a cosmologist like Stephen Hawking isn’t an expert in artificial intelligence. While I’m not exactly an “expert” in either A.I. or neuroscience, I do know enough to realize that fears of Skynet or the Matrix are unfounded. Let’s look at just a few of the reasons why.

Logic Versus Sentience

First of all, there’s the whole logic-versus-emotion thing to consider. Before anyone or anything can take to the notion of enslaving another group, there must first be the desire to do so. Computers rely purely on logic, not emotion; by comparison, they make Vulcans seem like the life of the party. Enslavement requires a sense of self-righteousness combined with insecurity. (Lacking a sufficient firewall or antivirus software isn’t the “insecurity” I’m referring to.) Until computers are programmed with sentience combined with mental instability, there will be no desire to enslave. Any action machines could take to control a population would have to be a programmed protocol, part of their primary operation. At that point it’s humans enslaving humans, using the machines as a tool and nothing more. As for the idea of machines doing it for our own good, they would first have to care about us. Again, there’s that pesky emotion thing. A computer can be taught to feign compassion, but true compassion is outside the known realm of a chunk of silicon and transistors. Could one do the HAL 9000 thing and misinterpret its conflicted programming? Only if there were enough conflicting instructions to create that situation in the first place.

Define “Smart”

Next, let’s look at what it means to be “smart” and whether a computer can ever qualify, starting with the supposed passing of the Turing Test. Alan Turing, a major computer science pioneer, devised a test to determine whether a computer has reached the point at which it can be considered “smart” at a near-human level. It involves carrying on a blind conversation with several experts; if enough of them can be fooled into thinking the machine is human, it has passed the Turing Test. There’s just one problem: the test has become completely useless. Nowadays we have what is called a chat bot. A chat bot has little clue what it’s saying; it just uses algorithms to make small talk without comprehending any of it. A well-designed chat bot such as Cleverbot can even let you recreate the “Don’t Blink” monologue from Doctor Who, but they’re not good for much else at this point. As for the program in question, which claimed to be a 13-year-old boy named Eugene Goostman, there are red flags with this instance, not the least of which is that whenever it made a mistake, it used its age as an excuse.
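To make that concrete, here is a minimal sketch of the kind of pattern matching a chat bot relies on. The rules and canned replies below are invented for illustration, not Cleverbot’s or Goostman’s actual logic, but the principle is the same: match a pattern, fill in a template, and deflect when nothing matches.

```python
import re

# Toy rules: (pattern to search for, reply template).
# A chat bot doesn't understand any of this; it only matches text.
RULES = [
    (r"\bmy name is (\w+)", "Nice to meet you, {0}."),
    (r"\bi feel (\w+)", "Why do you feel {0}?"),
    (r"\b(hello|hi)\b", "Hello! How are you today?"),
]

def reply(message: str) -> str:
    text = message.lower()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())
    # No pattern matched: deflect, exactly as chat bots tend to do.
    return "Interesting. Tell me more."

print(reply("Hello there"))        # Hello! How are you today?
print(reply("my name is Eugene"))  # Nice to meet you, eugene.
print(reply("What is 2 + 2?"))     # Interesting. Tell me more.
```

Notice that the last question gets a deflection rather than an answer; there is no comprehension anywhere in the loop, just lookup and substitution.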

The Turing Test is pointless.

This comes as no surprise to most computer programmers, who know firsthand that computers are beyond stupid. Computers have to be told how to do every little task and, thus far, are limited to operating only within a given set of parameters. So why do they seem so intelligent? Think about criminals. The idea of the “dumb criminal” is prevalent for good reason: criminals are typically not too smart. Yet law enforcement officials constantly have to put forth a lot of effort to keep up with the criminal element. Is it because those in law enforcement are even stupider than the criminals? Of course not. What little intelligence criminals have is focused on a single task, and the more one can focus without distraction, the better one can process that particular subject. Now consider humans versus machines. Humans are constantly processing all five senses, taking in both the target of their attention and everything peripheral to it. Plus, humans have a tendency to think about seemingly unrelated things as the mind’s way of trying to understand the target of its focus. Machines, on the other hand, have only one task to do. Period. End of story. As such, machines can often do a single task faster and more efficiently, but that doesn’t make them “smarter”. It only provides an illusion.

Transistors vs. Neurons

Here is a major difference between computers and humans. Computers use integrated circuits (which some call “microchips”), each integrating millions of transistors with other microscopic components. A transistor is nothing more than an on/off switch; that is what the zeroes and ones of “machine code” represent. Neurons (“brain cells”), on the other hand, are multi-phase switches, and each has its own processing capability. The synaptic patterns (neural pathways) in the brain are also far more complex than the circuitry in integrated circuits, and they have the inherent ability to adapt and change. Computers? Not so much. The adaptation of a biological brain produces individuality and uniqueness, which provides a greater pool of resources in the form of differing perspectives. In computers, similar circuitry and similar software will always produce similar results. If the ability to problem-solve is treated as a resource, a robot uprising will be short on resources.
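The contrast can be sketched in a few lines of code. This is a deliberately crude model — real neurons are vastly more complicated than any weighted sum — but it shows the basic difference between a strict on/off switch and a graded, adjustable unit:

```python
import math

# A transistor is a binary gate: the output is strictly 0 or 1.
def transistor(on: bool) -> int:
    return 1 if on else 0

# A crude neuron model (simplified far beyond real biology): many
# weighted inputs are summed and squashed into a continuous value,
# and the weights themselves can be adjusted over time -- the
# "adaptation" described above.
def neuron(inputs, weights, bias=0.0):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # graded output, not just on/off

print(transistor(True))   # 1 -- only ever 0 or 1
print(neuron([1, 1], [0.5, -0.2]))  # a value strictly between 0 and 1
```

Change a weight and the same inputs produce a different output; a transistor has no such knob. Multiply that by billions of neurons and trillions of adjustable synapses and the gap widens quickly.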

Maximum Capability

Something else interesting about biological adaptability is that we have absolutely no clue what the upper human limit is in any field, if there even is one. Any time we think there’s a limit, somebody finds a way to break it. At one time, it was commonly believed that human physiology would prevent anyone from running a mile in four minutes. Then came Roger Bannister. After he broke that barrier, a couple hundred others did so shortly afterward. Then there are modern extreme sports, with feats such as Tony Hawk’s 900, Travis Pastrana’s double backflip, Mike Spinner’s quad tailwhip, Torstein Horgmo’s triple cork, and most recently Kacy Catanzaro’s qualifying run on American Ninja Warrior. Superhuman feats of strength also tend to occur when a person is in danger. It seems that any time humans face a barrier, we find some way past it, even defying what were believed to be physical limitations. It almost leads one to question whether there really is an upper human limit, or whether all known limitations exist only in our own perceptions. Machines, on the other hand, have strict limits: definite caps on processing power, memory, lifting capacity, and so on. The only way past those limits is to upgrade the hardware. Mechanical limitations are usually clear, but biological limitations are a mystery and tend to be beyond our wildest imaginations. The human capacity to rise to a challenge, especially when working together, is unrivaled. An uprising of machines would have the limitations of the machines involved to contend with, as well as the unknown variable of the force it must face.

Transhumanism

If all those reasons weren’t enough, there’s the concept of transhumanism. Transhumanism is the belief and practice of enhancing the human body through technological modification, which includes genetics but is mostly mechanical. Scientists have been able to hook up mechanical appendages to the central nervous systems of monkeys, whose brains adapted to control those appendages through thought alone, as if they were the monkeys’ own arms. Research is being conducted on implanting chips in human brains that would allow people to control machines by thought alone. The combination of hardware and software is no match for the combination of hardware, software, and naturally adaptive wetware. This leads to the concept of anything-you-can-do-I-can-do-better: whatever a machine can employ, a transhuman can employ as well, often to greater and more creative effect.

Benevolent Overlords

Finally, in the extremely unlikely event that every one of the points above fails and the machines actually do take over, would they enslave us as feared? Just because machines become dominant doesn’t necessarily mean we will face oppression. After all, they would be “smarter” than us, right? Enslavement is less about logical practicality and more about one group exerting force over another for its own selfish purposes. That’s definitely illogical, and it defies how computers think. As discussed before, human neurology creates a uniqueness in each individual. Each of us has passions, desires, talents, and abilities different from the next person’s. When that individuality is suppressed and a person is forced to be a drone, that person’s value and ability to contribute are greatly diminished. But when one is able to fulfill one’s true abilities, the contributions are much more valuable. So what is the point of oppression if the overlords are smarter? The most intelligent choice would be to unleash each individual’s unique talents for the good of the whole. In other words, the machines would reason that the best way to utilize human resources is to maximize those resources individually. This is something that employers are only now getting around to fully realizing, and mostly in places like Silicon Valley, where creativity is prized above all. So what is there to fear if the new overlords are bound and determined to bring out the best in us? Burning their resources would be illogical, and we would reap greater benefits than if we were left to our own devices.

This is probably as bad as it gets, folks. Keep calm and quit worrying.

So there you have it. Enslavement by machines makes for some great science fiction, with the emphasis on the word “fiction”. Science reality, on the other hand, would be something completely different. What we would be looking at would most likely be no worse than the droids in Star Wars. Even experts can be wrong, and on this subject Stephen Hawking isn’t even an expert. And when non-experts spread fear and panic over any given subject, it does a grave disservice to the entire scientific community. Your laptop isn’t going to be out to get you any time soon, your refrigerator isn’t plotting against you, and your toaster will simply make toast. ###

SciFi4Me is an online multimedia magazine covering the science fiction and fantasy genres with news, reviews, TV show recaps, and interviews from all across the science fiction community. Our coverage includes related comics and video games, conventions, and science & technology. This article originally appeared on their blog here: http://scifi4me.com/2014/07/21/artificial-intelligence-hawkings-flaw/