Is Automation Making Us Stupid?

I’m currently reading Nicholas Carr’s book The Glass Cage: Automation and Us. I think it is an important contribution to the ongoing debate about the growth of AI and robotics, and the future of humanity. Carr is something of a techno-pessimist (though he may prefer ‘realist’) and the book continues the pessimistic theme set down in his previous book The Shallows (which was a critique of the internet and its impact on human cognition). That said, I think The Glass Cage is a superior work. I certainly found it more engaging and persuasive than his previous effort.

Anyway, because I think it raises some important issues, many of which intersect with my own research, I want to try to engage with its core arguments on this blog. I’ll do so over a series of posts. I start today with what I take to be Carr’s central critique of the rise of automation. This critique is set out in chapter 4 of his book. The chapter is entitled ‘The Degeneration Effect’, and it makes a number of arguments (though none of them are described formally). I identify two in particular. The first deals with the effects of automation on the quality of decision-making (i.e. the outputs of decision-making). The second deals with the effects of automation on the depth and complexity of human thought. The two are connected, but separable. I want to deal with them separately here.

In the remainder of this post, I will discuss the first argument. In doing so, I’ll set out some key background ideas for understanding the debate about automation.

1. The Nature of the Automation Loop

I’ve written before on this blog about algorithm-based decision-making systems: a sub-type of automated system in which a computer algorithm takes over a decision-making function that was once performed by a human being.

In discussing that phenomenon, I attempted to offer a brief taxonomy of the possible algorithm-based systems. The taxonomy made distinctions between (i) human in the loop systems (in which humans were still necessary for the decision-making to take place); (ii) human on the loop systems (in which humans played some supervisory role); and (iii) human off the loop systems (which were fully automated and prevented humans from getting involved). The taxonomy was not my own; I copied it from the work of others. And while I still think that this taxonomy has some use, I now believe that it is incomplete. This is for two reasons. First, it doesn’t clarify what the ‘loop’ in question actually is. And second, it doesn’t explain exactly what role humans may or may not be playing in this loop. So let’s try to add the necessary detail now with a refined taxonomy.

Let’s start by clarifying the nature of the automation loop. This is something Carr discusses in his book by reference to historical examples. The best of these is the automation of anti-aircraft guns at the end of WWII. Early on in that war it was clear that the mental calculations and physical adjustments needed to fire an anti-aircraft gun effectively were too much for any individual human to undertake. Scientists worked hard to automate the process (though they didn’t succeed fully until after the war — at least as I understand the history):

This was no job for mortals. The missile’s trajectory, the scientists saw, had to be computed by a calculating machine, using tracking data coming in from radar systems along with statistical projections from a plane’s course, and then the calculations had to be fed automatically into the gun’s aiming mechanism to guide the firing. The gun’s aim, moreover, had to be adjusted continually to account for the success or failure of previous shots.

(Carr 2014, p 35)

The example illustrates all the key components in an automation loop. There are four in total:

(a) Sensor: some machine that collects data about a relevant part of the world outside the loop, in this case the radar system.

(b) Processor: some machine that processes and identifies relevant patterns in the data being collected, in this case some computer system that calculates trajectories based on the incoming radar data and issues instructions as to how to aim the gun.

(c) Actuator: some machine that carries out the instructions issued by the processor, in this case the actual gun itself.

(d) Feedback Mechanism: some system that allows the entire loop to learn from its previous efforts, i.e. allows it to collect, process and act in more efficient and more accurate ways in the future. We could also call this a learning mechanism. In many cases humans still play this role by readjusting the other elements of the loop.

These four components should be familiar to anyone with a passing interest in cognitive science and AI. They are, after all, the components of any intelligent system. That is no accident. Since automated systems are designed to take over tasks from human beings, they are going to try to mimic the mechanisms of human intelligence.

Automation loops of this sort come in many different flavours, as many flavours as there are types of sensor, processor, actuator and learning mechanism (up to the current limits of technology). A thermostat is a very simple automation loop: it collects temperature data, processes it into instructions for turning the heating system on or off, and uses negative feedback to keep the temperature of a room within a set range (modern thermostats like the Nest have more going on). A self-driving car is a much more complicated automation loop: it collects visual data, processes it quite extensively by identifying and categorising relevant patterns, and then uses this to issue instructions to an actuating mechanism that propels the vehicle down the road.
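To make the structure of the loop concrete, here is a toy sketch of a thermostat-style loop in Python. The function and class names are my own illustrative inventions (nothing here comes from Carr), and a real thermostat, never mind a Nest, is of course far more sophisticated:

```python
def run_thermostat(read_temperature, set_heater, target=20.0, hysteresis=0.5, steps=10):
    """A bang-bang control loop: (a) sense, (b) process, (c) actuate, (d) feed back."""
    heater_on = False
    for _ in range(steps):
        temp = read_temperature()          # (a) sensor: sample the world
        if temp < target - hysteresis:     # (b) processor: turn the reading into a decision
            heater_on = True
        elif temp > target + hysteresis:
            heater_on = False
        set_heater(heater_on)              # (c) actuator: act on the decision
        # (d) feedback: the heater's effect shows up in the next sensor reading


class Room:
    """A crude simulated room so the loop has something to sense and act upon."""
    def __init__(self, temp=17.0):
        self.temp, self.heating = temp, False

    def read_temperature(self):
        self.temp += 1.0 if self.heating else -0.3   # warm up, or drift back down
        return self.temp

    def set_heater(self, on):
        self.heating = on
        print(f"temperature {self.temp:.1f}C, heater {'on' if on else 'off'}")


room = Room()
run_thermostat(room.read_temperature, room.set_heater)
```

Each pass through the loop runs the four components in turn; the ‘learning’ here is nothing grander than the heater’s effect on the room feeding back into the next temperature reading.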

Humans can play a variety of different roles in such automation loops. Sometimes they might be sensors for the machine, collecting and feeding it relevant data. Sometimes they might play the processing role. Sometimes they could be actuators, i.e. the muscle that does the actual work. Sometimes they might play one, two or all three of these roles. Sometimes they might share these roles with the machine. When we think about humans being in, on, or off the loop, we need to keep in mind these complexities.

To give an example, the car is a type of automation device. Traditionally, the car just played the part of the actuator; the human was the sensor and processor, collecting data and issuing instructions to the machine. The basic elements of this relationship remain the same, though there is now some outsourcing and sharing of sensory and processing functions with the car’s onboard computers. So, for example, my car can tell me how close I am to an object by making a loud noise; it can keep the car travelling at a constant speed when cruising down a motorway; and it can even calculate my route and tell me where to go using its built-in GPS. I’m still very much in the loop, but the machine is taking over more of the functions I used to perform myself.
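As a toy illustration of this sharing of functions, here is a sketch of the kind of proportional speed controller that might sit behind a cruise-control feature. The numbers and names are mine, purely for illustration; production systems use far more elaborate, safety-checked control than this:

```python
def cruise_control_step(current_speed, target_speed, gain=0.05):
    """One tick of a toy proportional controller: the throttle correction is
    proportional to how far the car is below (or above) the set speed."""
    error = target_speed - current_speed     # the car measures speed; the human chose the set point
    throttle = min(1.0, max(0.0, 0.3 + gain * error))
    return throttle                          # handed on to the actuator (the engine)

# Travelling at 110 km/h with the cruise control set to 120 km/h:
print(cruise_control_step(current_speed=110, target_speed=120))   # 0.8: a bit of extra throttle
```

The point is only that sensing and processing are now split between human and machine: the driver chooses the target, while the car measures and corrects.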

Eventually, the car will be a fully automated loop, with little or no role for human beings. Not even a supervisory one. Indeed, some manufacturers want this to happen. Google, reportedly, want to remove steering wheels from their self-driving cars. Why? Because it is only when humans take over that accidents seem to happen. The car will be safer if left to its own devices. This suggests that full automation might be better for the world.

2. The Consequences of Automation for the External World
Automation is undertaken for a variety of reasons. Oftentimes the motivation is benevolent. Engineers and technicians want to make systems safer and more effective, or they want to liberate humans from the drudge work, and free them up to perform more interesting tasks. Other times the motivation might be less benevolent. Greedy capitalists might wish to eliminate human workers because it is cheaper, and because humans get tired and complain too much.

There are important arguments to be had about these competing motivations. But for the time being let’s assume that benevolent motivations predominate. Does automation always succeed in realising these benevolent aims? One of Carr’s central contentions is that it frequently does not. There is one major reason for this. Most people adhere to something called the ‘substitution myth’:

Substitution Myth: The belief that when a machine takes over some element of a loop from a human, the machine is a perfect substitute for the human. In other words, the nature of the loop does not fundamentally change through the process of automation.

The problem is that this is false. The automated component of the loop often performs the function in a radically different way and this changes both the other elements of the loop and the outcome of the loop. In particular, it changes the behaviour of the humans who operate within the loop or who are affected by the outputs of the loop.

Two effects are singled out for consideration by Carr, both of which are discussed in the literature on automation:

Automation Complacency: People get more and more comfortable allowing the machine to take complete control.

Automation Bias: People afford too much weight to the evidence and recommendations presented to them by the machine.

You might have some trouble understanding the distinction between the two effects. I know I did when I first read about them. But I think the distinction can be understood if we look back to the difference between human ‘in the loop’ and ‘on the loop’ systems. As I see it, automation complacency arises in the case of a human on the loop system. The system in question is fully automated with some limited human oversight (i.e. humans can step in if they choose). Complacency arises when they choose not to step in. Contrariwise, automation bias arises in the case of a human in the loop system. The system in question is only partially automated, and humans are still essential to the process (e.g. in making a final judgment about the action to be taken). Bias arises when they don’t second-guess or go beyond the recommendations given to them by the machine.

There is evidence to suggest that both of these effects are real. Indeed, you have probably experienced some of these effects yourself. For example, how often do you second-guess the route that your GPS plans for you? But so what? Why should we worry about them? If the partially or fully automated loop is better at performing the function than the previous incarnation, then isn’t this all to the good? Could we not agree with Google that things are better when humans are not involved?

There are many responses to these questions. I have offered some myself in the past. But Carr clearly thinks that these two effects have some seriously negative implications. In particular, he thinks that they can lead to sub-optimal decision-making. To make his point, he gives a series of examples in which complacency and bias led to bad outcomes. I’ll describe four of them here.

I’ll start with two examples of complacency. The first is the case of the 1,500-passenger ocean liner Royal Majesty, which ran aground on a sandbar near Nantucket in 1995. The vessel had been travelling from Bermuda to Boston and was equipped with a state-of-the-art automated navigation system. However, an hour into the voyage a GPS antenna came loose and the ship proceeded to drift off course for the next 30 hours. Nobody on board did anything to correct the mistake, even though there were clear signs that something was wrong. They didn’t think to challenge the wisdom of the machine.

A similar example of complacency comes from Sherry Turkle’s work with architects. In her book Simulation and its Discontents she notes how modern-day architects rely heavily on computer-generated plans for the buildings they design. They no longer painstakingly double-check the dimensions in their blueprints before handing the plans over to construction crews. This results in occasional errors. All because they have become reluctant to question the judgment of the computer program.

As for bias, Carr gives two major examples. The first comes from drivers who place excessive reliance on GPS route planners when driving. He cites the 2008 case of a bus driver in Seattle. The top of his bus was sheared off when he collided with a concrete bridge with a nine-foot clearance. He was carrying a high-school sports team at the time, and twenty-one of the students were injured. He said he did not see the warning lights because he was busy following the GPS instructions.

The other example comes from the decision-support software that is nowadays used by radiographers. This software often flags particular areas of an X-ray scan for closer scrutiny. While this has proven helpful in routine cases, a 2013 study found that it actually reduces the performance of expert readers in difficult cases. In particular, the study found that experts tend to overlook areas of the scans that are not flagged by the software but that could be indicative of some types of cancer.

These four examples support the claim that automation complacency and automation bias can lead to inferior outcomes.

But is this really persuasive? I think there are some problems with the argument. For one thing, some of these examples are purely anecdotal. They highlight sub-optimal outcomes in certain cases, but they involve no proper control data. The Royal Majesty may have run aground in 1995 but how many accidents have been averted by the use of automated navigation systems? And how many accidents have arisen through the fault of human operators? (I can think of at least two high profile passenger-liner accidents in the past couple of years, both involving human error). Likewise, the bus driver may have crashed into the bridge, but how many people have gotten to their destinations faster than they otherwise would have through the use of GPS? I don’t think anecdotes of this sort are a good way to reach general conclusions about the desirability of automation systems.

The work on radiographers is more persuasive since it shows a deleterious comparative effect in certain cases. But, at the same time, it also found some advantages to the use of the technology. So the evidence is more mixed there. Now, I wouldn’t want to make too much of all this. Carr provides other examples in the book that make a good point about the potential costs of automation. For instance, in chapter five he discusses some other examples of the negative consequences of automation and digitisation in the healthcare sector. So there may be a good argument to be made about the sub-optimal nature of automation. But I suspect it needs to be made much more carefully, and on a case-by-case basis.

In saying all this, I am purely focused on the external effects of automation, i.e. the effects with respect to the output or function of the automated system. I am not concerned with the effects on the humans who are being replaced. One of Carr’s major arguments is that automation has deleterious effects for them too, specifically with respect to the degeneration of their cognitive functioning. This turns out to be a far more interesting argument.

That is the argument I want to look at next: the degeneration argument, which is perhaps the central argument of the book.

According to this argument, we should not just worry about the effects of automation on outcomes; we should worry about its effects on the people who have to work with or rely upon automated systems. Specifically, we should worry about its effects on the quality of human cognition. It could be that automation is making us stupider. This seems like something worth worrying about.

Let’s see how Carr defends this argument.

3. The Whitehead Argument and the Benefits of Automation
To fully appreciate Carr’s argument, it is worth comparing it with an alternative argument, one that defends the contrary view. One such argument can be found in the work of Alfred North Whitehead, the famous British philosopher and mathematician, probably best known for his collaboration with Bertrand Russell on Principia Mathematica. In his 1911 work, An Introduction to Mathematics, Whitehead made the following claim:

It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle — they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.

Whitehead may not have had modern methods of automation in mind when he wrote this — though his work did help to inaugurate the computer age — but what he said can certainly be interpreted by reference to them. For it seems like Whitehead is advocating, in this quote, the automation of thought. It seems like he is saying that the less mental labour humans need to expend, the more ‘advanced’ civilization becomes.

But Carr thinks that this is a misreading, one that is exacerbated by the fact that most people only quote the line starting ‘civilization advances…’ and leave out the rest. If you look at the last line, the picture becomes more nuanced. Whitehead isn’t suggesting that automation is an unqualified good. He is suggesting that mental labour is difficult. We have a limited number of ‘cavalry charges’. We should not be expending that mental effort on trivial matters. We should be saving it for the ‘decisive moments’.

This, then, is Whitehead’s real argument: the automation of certain operations of thought is good because it frees us up to think the more important thoughts. To put it a little more formally (and in a way that Whitehead may not have fully endorsed but which suits the present discussion):

  • (1) Mental labour is difficult and finite: time spent thinking about trivial matters limits our ability to think about more important ones.
  • (2) It is good if we have the time and ability to think the more important thoughts.
  • (3) Therefore, it would be good if we could reduce the amount of mental labour expended on trivial matters and increase the amount spent on important ones.
  • (4) Automation helps to reduce the amount of mental labour expended on trivial matters.
  • (5) Therefore, it would be good if we could automate more mental operations.

This argument contains the optimism that is often expressed in debates about automation and the human future. But is this optimism justified?

4. The Structural Flaws in the Whitehead Argument
Carr doesn’t think so. Although he never sets it out in formal terms, I believe that his reason for thinking this can be understood in light of the preceding version of Whitehead’s argument. Look again at premise (4) and the inference from that premise and premise (3) to (5). Do you think this forms a sound argument? You shouldn’t. There are at least two problems with it.

In the first place, it seems to rely on the following implicit premise:

  • (6) If we reduce the amount of mental labour expended on trivial matters, we will increase the amount expended on more important ones.

This premise — which is distinct from premise (1) — is needed if we wish to reach the conclusion; without it, the conclusion does not follow. And once this implicit premise is made explicit, you begin to see where the problems might lie. It could be that humans are simply lazy: free them from thinking about trivial matters and they won’t spend the spare mental labour on the hard thoughts; they’ll simply double down on other trivial matters.

The second problem is more straightforward, but again highlights a crucial assumption underlying the Whitehead argument. The problem is that the inference to (5) seems to assume that automation will always be focused on the more trivial thoughts, and that the machines will never be able to take away the higher forms of thinking and creativity. This assumption may also turn out to be false.

We have then two criticisms of the Whitehead argument. I’ll give them numbers and plug them into an argument map:

  • (7) In freeing us up from thinking trivial thoughts, automation may not lead to us thinking the more important ones: we may simply double-down on other trivial thoughts.
  • (8) Automation may not be limited to trivial matters; it may take over the important types of thinking too.

But this is to speak in fairly abstract terms. Are there any concrete reasons for thinking that these implicit premises and underlying assumptions actually count against the Whitehead argument? Carr thinks that there are. In particular, he thinks that there is strong evidence from psychology suggesting that the rise of automation doesn’t simply free us up to think more important thoughts. On the contrary, he thinks the evidence suggests that our creeping reliance on automation is degrading the quality of our thinking.

5. The Degeneration Argument
Carr’s argument is quite straightforward. It starts with a discussion of the generation effect. This is something that was discovered by psychologists in the 1970s. The original experiments had to do with memorisation and recall. The basic idea is that the more cognitive work you have to do during the memorisation phase, the better able you are to recall the information at a future date. Suppose I gave you a list of contrasting words to remember:

HOT: COLD

TALL: SHORT

How would you go about doing it? Unless you have some familiarity with memorisation techniques (like the linking or loci methods), you’d probably just read through the list and start rehearsing it in your mind. This is a reasonably passive process. You absorb the words from the page and try to drill them into your brain through repetition. Now suppose I gave you the following list of incomplete word pairs, and then asked you to both (a) complete the pairs; and (b) memorise them:

HOT: C___

TALL: S___

This time the task requires more cognitive effort. You actually have to generate the matching word in your mind before you can start trying to remember the list. In experiments, researchers found that people who were forced to take this more effortful approach were significantly better at remembering the information at a later point in time. This is the generation effect in action. Although the original studies were limited to rote memorisation, later studies revealed that the effect has a much broader application: it helps with conceptual understanding, problem solving, and recall of more complex materials too. As Carr puts it, these experiments show that our mind ‘rewards us with greater understanding’ when we exert more focus and attention.
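As a toy illustration of the two study conditions (the word list and helper names here are my own, not taken from any of the original experiments):

```python
word_pairs = [("HOT", "COLD"), ("TALL", "SHORT"), ("FAST", "SLOW")]

def read_condition(pairs):
    # Passive condition: the complete pair is presented and merely rehearsed.
    return [f"{a}: {b}" for a, b in pairs]

def generate_condition(pairs):
    # Generation condition: only the first letter of the target word is shown,
    # so the subject has to produce the rest from their own cognitive resources.
    return [f"{a}: {b[0]}{'_' * (len(b) - 1)}" for a, b in pairs]

print(read_condition(word_pairs))      # ['HOT: COLD', 'TALL: SHORT', 'FAST: SLOW']
print(generate_condition(word_pairs))  # ['HOT: C___', 'TALL: S____', 'FAST: S___']
```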

The generation effect has a corollary: the degeneration effect. If anything that forces us to use our own internal cognitive resources enhances our memory and understanding, then anything that takes away the need to exert those resources will reduce our memory and understanding. This is what seems to be happening in relation to automation. Carr cites the experimental work of Christof van Nimwegen in support of this view.

Van Nimwegen has done work on the role of assistive software in conceptual problem solving. Some of you are probably familiar with the Missionaries and Cannibals game: a classic logic puzzle in which you must ferry a group of missionaries and cannibals across a river without the missionaries ever being outnumbered (and eaten). The game comes with a basic set of rules, and you must get everyone across the river in the smallest number of trips while conforming to those rules. Van Nimwegen performed experiments contrasting two groups of problem solvers on this game. The first group worked with a simple software program that provided no assistance to those playing the game. The second group worked with a software program that offered on-screen prompts, including details as to which moves were permissible.
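For concreteness, here is roughly what the unaided players had to work out for themselves. This is a minimal breadth-first-search sketch of the classic three-missionaries, three-cannibals, two-seat-boat version of the puzzle; it is my own illustration, not a reconstruction of van Nimwegen’s software:

```python
from collections import deque

def solve_missionaries_and_cannibals(total=3, boat_capacity=2):
    """Breadth-first search over states (missionaries_left, cannibals_left, boat_on_left).
    Returns the shortest sequence of boat loads, e.g. [(1, 1), (1, 0), ...]."""

    def safe(m, c):
        # Missionaries must not be outnumbered on either bank
        # (a bank with no missionaries is always safe).
        left_ok = (m == 0) or (m >= c)
        right_ok = (total - m == 0) or (total - m >= total - c)
        return left_ok and right_ok

    # Every legal boat load: at least one person, at most boat_capacity.
    loads = [(dm, dc) for dm in range(boat_capacity + 1)
                      for dc in range(boat_capacity + 1)
                      if 1 <= dm + dc <= boat_capacity]

    start, goal = (total, total, True), (0, 0, False)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (m, c, boat_left), path = queue.popleft()
        if (m, c, boat_left) == goal:
            return path
        for dm, dc in loads:
            # The boat carries people away from whichever bank it is currently on.
            nm, nc = (m - dm, c - dc) if boat_left else (m + dm, c + dc)
            if 0 <= nm <= total and 0 <= nc <= total and safe(nm, nc):
                state = (nm, nc, not boat_left)
                if state not in seen:
                    seen.add(state)
                    queue.append((state, path + [(dm, dc)]))
    return None

solution = solve_missionaries_and_cannibals()
print(len(solution), "trips:", solution)
```

Breadth-first search guarantees the shortest sequence of crossings; for the classic three-and-three version it finds an eleven-trip solution.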

The results were interesting. The people using the assistive software made faster progress at first, thanks to the on-screen prompts, but their advantage faded over time. It was the unassisted group that emerged as the winners: they solved the puzzles more efficiently and with fewer wrong moves. What’s more, in a follow-up study performed eight months later, it was found that members of the unassisted group were better able to recall how to solve the puzzle. Van Nimwegen went on to repeat the result in experiments involving different types of task. This suggests that automation can have a degenerating effect, at least when compared to traditional methods of problem-solving.

Carr suggests that other evidence confirms the degenerating effect of automation. He cites a study of accounting firms using assistive software, which found that human accountants relying on the software had a poorer understanding of risk. Likewise, he gives the (essentially anecdotal) example of software engineers relying on assistive programs to clean up their dodgy first-draft code. In the words of one Google software developer, Vivek Haldar, this has led to “Sharp tools, dull minds.”

Summarising all this, Carr seems to be making the following argument. This could be interpreted as an argument in support of premise (7), given above. But I prefer to view it as a separate counterargument because it also challenges some of the values underlying the Whitehead argument:

  • (9) It is good if humans can think higher thoughts (i.e. have some complexity and depth of understanding).
  • (10) In order to think higher thoughts, we need to engage our minds, i.e. use attention and focus to generate information from our own cognitive resources (this is the ‘generation effect’).
  • (11) Automation inhibits our ability to think higher thoughts by reducing the need to engage our own minds (the ‘degeneration effect’).
  • (12) Therefore, automation is bad: it reduces our ability to think higher thoughts.

 


6. Concluding Thoughts
What should we make of this argument? I am perhaps not best placed to critically engage with some aspects of it. In particular, I am not best placed to challenge its empirical foundation. I have located the studies mentioned by Carr and they all seem to support what he is saying, and I know of no competing studies, but I am not well-versed in the literature. For this reason, I just have to accept this aspect of the argument and move on.

Fortunately, there are two other critical comments I can make by way of conclusion. The first has to do with the implications of the degeneration effect. If we assume that the degeneration effect is real, it may not imply that we are generally unable to think higher thoughts. It could be that the degeneration is localised to the particular set of tasks that is being automated (e.g. solving the missionaries and cannibals game). And if so, this may not be a big deal. If those tasks are not particularly important, humans may still be freed up to think the more important thoughts. It is only if the effect is more widespread that a problem arises. And I don’t wish to deny that this could be the case. Automation systems are becoming more widespread and we now expect to rely upon them in many aspects of our lives. This could result in the spillover of the degeneration effect.

The other comment has to do with the value assumption embedded in premise (9) (which was also included in premise (2) of the Whitehead argument). There is some intuitive appeal to this. If anyone is going to be thinking important thoughts I would certainly like for that person to be me. Not just for the social rewards it may bring, but because there is something intrinsically valuable about the act of high-level thinking. Understanding and insight can be their own reward.

But there is an interesting paradox to contend with here. When it comes to the performance of most tasks, the art of learning involves transferring the performance from the conscious realm to the sub-conscious realm. Carr mentions the example of driving: most people know how difficult it is to learn how to drive. You have to perform a sequence of smoothly coordinated and highly unnatural actions. This takes a great deal of cognitive effort at first, but over time it becomes automatic. This process is well-documented in the psychological literature and is referred to as ‘automatization’. So, ironically, developing our own cognitive resources may simply result in further automation, albeit this time automation that is internal to us.

The assumption is that this internal form of automation is superior to the external form that comes with outsourcing the task to a machine. But is this necessarily true? If the fear is that externalisation causes us to lose something fundamental to ourselves, then maybe not. Could the external technology not simply form part of ‘ourselves’ (part of our minds)? Would externalisation not then be ethically on a par with internal automation? This is what some defenders of the extended mind hypothesis like to claim, and I discuss the topic at greater length in another post. I direct the interested reader there for more.

###

John Danaher is an academic with interests in the philosophy of technology, religion, ethics and law. He holds a PhD specialising in the philosophy of criminal law (specifically, criminal responsibility and game theory). He was formerly a lecturer in law at Keele University, and is currently a lecturer at the National University of Ireland, Galway (starting July 2014), where he works on technology, ethics, philosophy and law.

He blogs at http://philosophicaldisquisitions.blogspot.com and can be found here: https://plus.google.com/112656369144630104923/posts

This article originally appeared on John’s site as two separate posts. Republished under a Creative Commons license.