Wired For War or How We Learned to Stop Worrying and Let Dystopian SF Movies Inspire our Military Bots

P.W. Singer’s latest book, Wired For War: The Robotics Revolution and Conflict in the 21st Century, is exciting, fascinating and frightening. Singer covers the history of robotics for warfare (and robot history in general) before delving into the dizzying plethora of robotic systems being developed and/or used at a tremendously accelerated rate.

With loving detail, Singer describes all the new awesome bot warriors and the players involved in creating them. At the same time, he explores the horrors and dangers — both potential and current — and raises the alarming and increasingly unavoidable ethical issues that are bubbling to the surface as war increasingly becomes robot war.

Beyond that, Singer is a total science fiction geek, so the book in some sense hinges on the degree to which fictional fantasy is turning into reality. With the upcoming release of Terminator Salvation on both of our minds, he was clearly excited to lay out the many correlations between one of our leading cinematic mythologies and the actual evolution of robots for war.

Singer is a Senior Fellow at the Brookings Institution, where he is Director of the 21st Century Defense Initiative. He was also coordinator of the Defense Policy Task Force for Barack Obama’s 2008 presidential campaign. His other books include Children at War and Corporate Warriors: The Rise of the Privatized Military Industry.

h+: What were some of the more impressive or frightening things about robotics in warfare that you discovered in researching the book?

PWS: It’s impressive when you break it down into the three different directions that robotics and war are headed in. For one thing, there are the raw numbers, in terms of the use of these robotic systems. We’ve gone from a handful of drones during the Iraq invasion to more than 7,000 now in the U.S. military inventory. On the ground, we had zero unmanned vehicles before the invasion of Iraq. We now have over 12,000. And this is just the start.

This spread is also continuing in terms of the domains that they fight in. In the air, the U.S. purchased more unmanned planes than manned planes last year, and that will continue to increase. And the use of these systems is increasing on the ground and at sea. And then we’re starting to develop these systems for space and even cyberspace.

And it’s not just an American expansion. It’s global. There are 43 countries working on military robotics right now. So you have this just huge… immense growth. The best way to imagine where we’re going is to look at what Bill Gates says about robotics. He says, “Robotics are about where computers were in 1980.”

A… scientist talked about how the military came to him and said, “Oh, we’d like you to design the hunter-killer drone from the Terminator movies.”

The second impressive aspect of this is the new size and shapes — the forms that these robots come in. There are these huge robots such as planes with wings the length of a football field that are designed to stay up in the air not just hours, not just days…. but literally for weeks, months, even years. So there you have incredible possibilities in terms of the roles they might play, where it’s almost like a spy satellite in the sky. You can even contemplate it as a flying cell phone tower, a flying gas station… you name it. And then at the other end of the size and shape spectrum, they’ve got the teeny tiny robots. I saw one system that would fit on the tip of your finger. You can think of it as bugs with bugs — insect-like and insect size, but carrying James Bond surveillance bugs.

The third impressive aspect is their ever-greater intelligence and autonomy. We’ve gone from having systems where we remote controlled every single thing that they could do to systems where the human role is more managerial or supervisory. We’re slowly pushing ourselves outside of the loop. For example, the Global Hawk is a drone that’s the size of a plane. It’s a replacement for the U-2 spy plane. It can take off on its own; fly 3,000 miles on its own; carry out its mission on its own; turn around and fly itself back 3,000 miles on its own, and land itself. So it’s not so much being piloted as it’s being supervised or managed.

And then you get to the interactivity of these robots. There’s incredible work on social robots that can recognize facial expressions and then, in turn, give their own facial expressions. And this is going to continue, because you have Moore’s Law going on here, where our systems — our microchips — are doubling in their computing power roughly every two years. And that means that the kind of systems that we have today really are the Model T Ford. They’re the Wright Brothers’ Flyer as compared to what’s coming. If Moore’s Law holds true, the way it has held true for the last several decades, within 25 years our systems may be as much as a billion times more powerful than today. And so this all sounds like science fiction, and yet it is real right now. It’s a technological reality and a political reality.
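That projection is straightforward compound doubling. As an illustrative aside, here is a minimal sketch of the arithmetic in Python; the doubling periods below are assumptions chosen for the example, not figures from Singer:

```python
# Compound-doubling arithmetic behind a Moore's-Law-style projection.
# growth = 2 ** (years / doubling_period); the doubling periods used here
# are illustrative assumptions, not figures taken from the interview.

def growth_factor(years: float, doubling_period: float) -> float:
    """How many times more capable a system becomes after `years` years."""
    return 2 ** (years / doubling_period)

for period in (2.0, 1.0, 0.83):
    print(f"doubling every {period} years -> "
          f"~{growth_factor(25, period):,.0f}x after 25 years")
```

Which headline number you get depends entirely on the doubling period you assume; the sketch just makes that dependence explicit.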

h+: …which raises the inevitable Terminator question. Did you see anything that made you think of that film, particularly?

PWS: Oh, god. You know, what didn’t? I mean, I can think of just wonderful layers of anecdotes upon anecdotes about that.

I’ll give you four things that sort of jumped out at me and that I write about in the book. For one thing — it’s interesting where the scientists get their ideas about what to build. There’s a section in the book about the role that science fiction is playing in directly influencing battlefield reality. And I went around interviewing not just the scientists who design and build these systems, but the science fiction creators who inspire them. And I recall one of the scientists talking with incredible admiration about the robots in the opening scene of Terminator 2, where the robots are walking across the battlefield. This is basically what Terminator Salvation is about — that’s the world that movie is going to play in, right? And he was like: “This is incredibly impressive stuff.” You know, yeah, it’s stepping on a human skull, but it’s still really impressive.

Another scientist talked about how the military came to him and said, “Oh, we’d like you to design the hunter-killer drone from the Terminator movies.” Which, you know, is kind of incredibly scary, but it makes perfect sense from another perspective in that if it’s effective for SkyNet, their thinking is: “Well, it could be really neat in our real-world battlefields.”

The second example is all about perception issues. I didn’t just ask people what we think of our robots. I asked, what do other people think about our robots? What do the insurgents think about our robots? What do news reporters in places like Pakistan or Lebanon think about this? And so I remember doing one of the interviews with an Air Force officer and asking him, “What do you think the experience of a Predator drone attack is like?”

And he said, “You know, it’s probably like the opening scene of the Terminator movies, where the humans are hiding out in the caves and the bunkers, and this sort of relentless robotic foe is coming at them. That’s what I bet it’s like for al-Qaida and the Taliban.” And there’s sort of an irony there, in that we — the watchers of the movie — are supposed to be cheering for the humans on the ground. And the humans on the ground eventually get over their fears of this relentless foe and fight back. So it’s a sort of weird irony when you think about it that way.

The third example came up recently. These machines have incredible capabilities and offer immense possibilities. But then, you know, every revolution has two sides to it. And one of the challenges of some of these systems is that they gather scads and scads of data. It’s too much data, actually. One of the Air Force officers I met with, who was in charge of targeting for CENTCOM, described how every morning he would get a three-inch-high stack of intelligence reports and he’d have about 20 minutes to go through it and make his decisions. So, as he put it, there’s a lot of data falling on the floor. General Norton Schwartz, the chief of staff of the Air Force, told us in a talk at the Brookings Institution: “There’s no way we can hire enough intelligence analysts to go through all that data. So we’re going to have to turn it over increasingly to computers and AI to sift through all the data that we get and tell us what’s important.”

And so I said, “You know, that’s interesting. I actually have the perfect name for that AI that will go through all the information that we have. We could call it SkyNet.”

The fourth example is the role of what I call the refuseniks. The refuseniks are scientists working in robotics who are starting to worry that they are becoming too much like the Dyson character in the Terminator chronology. Dyson was the scientist who invented SkyNet and then learned, to his horror, what it would do. And he has this wonderful quote: “You’re judging me on things I haven’t even done yet. Jesus. How were we supposed to know?” And that’s in the world of fiction.

The refuseniks are also thinking about the world of history, most particularly what happened in the 1940s to the nuclear physicists who became so enamored of building this incredible technology — what became the atomic bomb — that they never took a step back and went, “Oh my gosh. What does this all mean?” And even more to the point — they tricked themselves into believing they were going to be the ones in charge of how it would be used. And, of course, after the Manhattan Project, they weren’t the ones in charge. And a lot of them asked themselves, “My god, what did I do?” And a lot of the people who invented the atomic bomb then became the founders of the arms control movement to ban the atomic bomb. So the refuseniks are real-world roboticists… a sort of small movement in the robotics field. As I joke, they’re the roboticists who just say no. They’re the ones who are saying, “You know what? I don’t want to accept military funding for what I’m doing. I’m not going to work on war bots.” And that’s another Terminator-like parallel. But just like in the Terminator chronologies, just because you’ve got some people who decide to change their minds, it doesn’t mean that the entire field isn’t moving in that direction.

h+: When I saw the refuseniks chapter in your book, my first thought was, “My god. That’s a very small chapter.” (laughter)

PWS: And it’s a very small movement.

h+: Right. It’s always interested me that during and right after the Vietnam War, you had a lot of people who refused to work with the military. And that changed. But you might think that after the Iraq invasion, more people would start having second thoughts again. But you don’t really see that. It seems like they’re on the gravy train and it doesn’t seem like anybody really wants to think about stepping off of it.

PWS: It’s a good question. I spoke at Carnegie Mellon recently and got a great reception from some people. And then this professor angrily emailed me afterward, basically saying (paraphrasing): “I’m so upset that you troubled our graduate students by forcing them to think about whether to take military funding or not.” (Facetiously) Oh my gosh. Horrors.

Just to be really clear — I’m not saying don’t take military funding. And you’ll notice, in the book, that I show a strong sense of admiration for the refuseniks, but I also show a strong sense of admiration for the people who say, “I do this because I think it saves American soldiers’ lives.” But I don’t have respect for the people who try to have it both ways — the ones who say, “Oh, I take military funding, but I have nothing to do with the military. I’m not linked to war in any way…” You know, that head-in-the-sand phenomenon. Unfortunately, that describes a large portion of the people in the field. They are the ones who take money from DARPA and don’t want to think about what the D stands for.

h+: Let’s return to the question of autonomous bots. What is the level of autonomy right now? I mean, are bots making decisions to pull a trigger or drop a bomb yet? Or if not, how close are we to that sort of thing?

PWS: Autonomy isn’t a yes or a no or an on-or-off switch. It’s more like a measure along a spectrum. And we’re giving more and more autonomy to our systems. A good human parallel to autonomy is maturity: how much of a leash does a parent let a kid operate under? And what we’re seeing with our robotics is that we’ve gone from humans involved in the complete operation of everything that they do to really loosening that leash. The Global Hawk that I talked about earlier is one illustration.

Warfare is going open source.

Another one is the C-RAM — the Counter Rocket, Artillery, and Mortar system. It looks a little bit like R2-D2 if R2-D2 had a 20-millimeter cannon mounted on him. It’s used to shoot down incoming artillery and rockets in places like the Green Zone in Iraq. A human turns the system on and off, but during its operations the human role is non-essential. When you have an incoming rocket, humans can basically get to mid-curse word — “Oh, cra…” That’s about it. That’s about the amount of time we have to respond. So because of the very speed and nature of war, the computer has to respond without our authorization or interference. So we are a part of the loop in one sense, but we really aren’t in another sense.

What I get from it is — we’ve redefined our sense of control, our sense of what it means for something to be autonomous or not.

You know, there’s a great description in the book from a bomber pilot of dropping a bomb. He’s a pilot. He’s in charge of that plane. But the way he described dropping the bomb was, “The computer opened the bomb bay doors, and sent the weapon out into the dark.” He wasn’t pressing the pickle button, like we see in the World War II movies. He was along for the ride… for the computer to decide when to drop it or not. Now, he could’ve stopped it. He’s still in charge. But you get what I’m talking about here.

And we’re giving more and more autonomy to the robots in terms of things like target-recognition software. Take the counter-sniper devices: when a sniper shoots at a soldier, the system pinpoints that sniper’s location in an instant. So we are not yet at the point of the Terminator movies, where you’ve got robots out there completely operating on their own, making all their own decisions. But we’re certainly already in a space where few imagined we would be. And then, secondly, we are working on those more Terminator-like systems.

h+: There’s some discussion that a strong AI system could actually make better decisions, even better ethical decisions, than governments and generals and soldiers. Did you run into any of that kind of discussion while you were working on your book?

PWS: There is some, but it’s very stove-piped. You have people working in the field who talk about that, but military ethicists aren’t dealing with it. So you have this stove pipe. And that’s one of the points of the book — to try and break these stove pipes down and say, “These things aren’t mere science fiction any more. They’re real. They need to be engaged with. And they can’t be engaged with by just the roboticists or just the military.”

Secondly, there is a somewhat inherent flaw in these discussions among the roboticists, when you peel them back a little bit. They argue, yes, machines can be more moral; they can be more ethical. But when you actually explore what it means to be ethical, what it means to be moral, you can’t have a machine that’s moral. Our very meaning of morality is something that’s very much linked to humans. The same thing with ethics. You can try to program a robot to follow a set of guidelines that we have defined as moral or ethical, but it doesn’t make the machine moral or ethical.

The more important part of that problem is that when you look at their sort of descriptions of how that would be accomplished, it’s a black box. They say, “Well you could have the robot with the ethics black box.” So you ask, “Okay, well what does that mean?” And they might say, “Well, it’s just a little black box in the diagrams.” It’s sort of like saying, you know, “Oh, well it’ll have the emotion box.” Well, how do you build it? What…

h+: …but if you get to the level of artificial intelligence where it’s approaching the sort of intelligence that Kurzweil talks about, couldn’t you have a bot being asked by generals to participate in a war? And the bot thinks about it and says (imitating a scene from 2001: A Space Odyssey), “I can’t do that, Dave.”

PWS: Yeah, in theory. But when you look at their outlines for it, it’s still a black box. It’s sort of like, “I’m going to have the car that produces no bad emissions because it’s powered by the cold fusion device.” And you say, “Oh, well… what’s your design for the cold fusion device?” “Well, it’ll be the cold fusion device.” Do you get what I’m saying?

h+: Sure. The speculation has nothing to do with the path we’re actually on with war bots.

PWS: So it’s a challenge. And you hear some people say, “Oh, you could program the machine to follow this set of rules, and it’s more likely to follow the rules than a soldier because it’s not driven by emotion. Soldiers are also supposed to be programmed to follow the Geneva Conventions, but their emotions take over. A buddy gets killed and they get angry. They commit a crime of rage or revenge. A robot is emotionless, so it’ll always follow that set of rules.”

Well, yes, a robot is emotionless. But it also has no sense of empathy… no sense of guilt. It will see an 80-year-old grandmother in her wheelchair in the very same way it sees a T-80 tank. They’re both just 0s and 1s in the programming language.

Finally, we’re living in an era where the application of the laws of war isn’t clear-cut. It’s difficult. You know, it’s not simple, like “Never shoot at noncombatants.” It’s “What do I do in a situation where I have a terrorist leader shooting at me from a house that also has women and children in it? What if I have an insurgent group that’s using an ambulance, and I’m not supposed to shoot at it because it’s got a red cross on the side of it, but they’re using that ambulance to move explosives around?” These are really tough questions that we could spend lengthy periods debating, and we would never come to a resolution. The computer isn’t going to be able to resolve those kinds of questions any time soon.

h+: I wanted to ask you about these robot systems escaping the military and getting into private hands. You had an interesting part of the book there with sort of a militia-type border patrol group that has drones.

PWS: Yes.

h+: So I’m picturing drug gangs and terrorist groups… angry high school students or whatever… you know, everybody having their own war bots. Are we moving towards that kind of situation?

PWS: The way I think about it is this — just like software, warfare is going open source. That is, we’re starting to use more and more systems that are commercial, off-the-shelf — some of it is even DIY. You can build your own version of the Raven drone, which is a widely used military drone, for about $1,000. So we have a flattening of the landscape of war and technology that is just like what happened in software. A wide variety of actors can utilize these systems.

You know, it’s not just the major states that are using military robotics. There are 43 countries working on them — rich states, poor states, big states, small states, you name it. Everyone from the U.S. to Pakistan and Iran to Belarus and on and on. And it’s the same thing with the non-state actors. And the non-state actors range from Hezbollah to this militia group in Arizona to a bunch of college kids at Swarthmore. It widens the landscape of who can play in war. That’s a pretty disturbing factor. One person’s hobby — such as the hobbyist who flew a homemade drone from North America to Great Britain — can be another person’s terrorist strike option.
