Teaching Robots the Rules of War

Military robots were once again in the news this month with headlines like "Corpse-eating robot" (Wired), "Robot… to eat all life on Earth" (The Register), and "Sniper Bot Refuels by consuming human bodies" (OhGizmo!). Such death-dealing robotic zombies would certainly make the Governator-style T-800 Terminators seem like tame puppies.

A subsequent press release from the manufacturers, Cyclone Power Technologies and Robotic Technology Inc (RTI), set the record straight: these robots will be strictly vegetarian. Cyclone announced that it had completed the first stage of development for a beta biomass engine system used to power RTI’s Energetically Autonomous Tactical Robot (EATR™), a most unfortunate choice of name for a grass eater.

The press release goes on to point out that “Desecration of the dead is a war crime under Article 15 of the Geneva Conventions, and is certainly not something sanctioned by DARPA, Cyclone or RTI.” But how are these autonomous robots to know the difference between battlefield dead and “twigs, grass clippings and wood chips”?

Ronald C. Arkin, director of the Mobile Robot Laboratory at Georgia Tech, may have the answer. He is designing ethical guidance software for battlefield robots under contract with the U.S. Army.

“My research hypothesis is that intelligent robots can behave more ethically in the battlefield than humans currently can,” says Dr. Arkin. “That’s the case I make.”

Analogous to a missile guidance system, which relies on radar and a radio or wired link between the control point and the missile, Arkin’s “ethical controller” is a software architecture that provides “ethical control and a reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system so that they fall within the bounds prescribed by the Geneva Conventions, the Laws of War, and the Rules of Engagement.”

Rather than guiding a missile to its intended target, Arkin’s system is being designed to reduce the need for humans in harm’s way: “… appropriately designed military robots will be better able to avoid civilian casualties than existing human war fighters and might therefore make future wars more ethical.”
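
To make the idea concrete, here is a minimal, hypothetical sketch of what such a constraint-checking layer might look like: a “governor” that sits between a tactical planner and the actuators and vetoes any proposed lethal action that violates an encoded rule. The class names, fields, and three toy constraints below are purely illustrative, not Arkin’s actual architecture or rule set.

```python
# Illustrative sketch only: a veto layer between a tactical planner and the
# robot's actuators. None of these names or rules come from Arkin's system.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class ProposedAction:
    """A lethal action proposed by the tactical planner (invented fields)."""
    target_type: str           # "combatant", "noncombatant", or "unknown"
    near_protected_site: bool  # e.g. hospital, school, place of worship
    force_level: int           # crude proxy for the force to be applied (1-10)
    threat_level: int          # assessed threat posed by the target (1-10)


@dataclass
class Constraint:
    """A single prohibition loosely paraphrasing a LOW/ROE concern."""
    name: str
    violated_by: Callable[[ProposedAction], bool]


CONSTRAINTS: List[Constraint] = [
    Constraint("discrimination", lambda a: a.target_type != "combatant"),
    Constraint("protected_site", lambda a: a.near_protected_site),
    Constraint("proportionality", lambda a: a.force_level > a.threat_level),
]


def governor(action: ProposedAction) -> Tuple[bool, List[str]]:
    """Permit the action only if no constraint is violated; report why not."""
    violations = [c.name for c in CONSTRAINTS if c.violated_by(action)]
    return (not violations, violations)


if __name__ == "__main__":
    proposed = ProposedAction(target_type="unknown", near_protected_site=False,
                              force_level=3, threat_level=5)
    permitted, reasons = governor(proposed)
    print("engage" if permitted else f"withhold fire: {reasons}")
    # -> withhold fire: ['discrimination']
```

The point of the sketch is the architecture, not the rules: lethal behavior is permitted only when every encoded constraint passes, and a refusal always carries the names of the constraints that blocked it.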

As reported in a recent New York Times article, Dr. Arkin describes some of the potential benefits of autonomous fighting robots. They can be designed without a sense of self-preservation and, as a result, with “no tendency to lash out in fear.” They can be built without anger or recklessness, and they can be made invulnerable to what he calls “the psychological problem of ‘scenario fulfillment,’ ” which causes people to absorb new information more easily if it matches their pre-existing ideas.

The SF writer Isaac Asimov first introduced the notion of ethical rules for robots in his 1942 short story “Runaround.” His famous Three Laws of Robotics state the following:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The Laws of War (LOW) and Rules of Engagement (ROE) make programming robots to adhere to Asimov’s Laws far from simple. You want the robots to protect friendly forces and “neutralize” enemy combatants, which likely means harming human beings on the battlefield.

In his recent book, Governing Lethal Behavior in Autonomous Robots, Dr. Arkin explores a number of complex real-world scenarios where robots with ethical governors would “do the right thing,” in consultation with humans on the battlefield. These scenarios include ROE and LOW adherence (Taliban and Iraq), discrimination (Korean DMZ), and proportionality and tactics (urban sniper).

Arkin’s “rules” end up looking less like Asimov’s and more like the following (a toy encoding appears after the list):

  1. Engage and neutralize targets as combatants according to the ROE.
  2. Return fire with fire proportionately.
  3. Minimize collateral damage — intentionally minimize harm to noncombatants.
  4. If uncertain, invoke tactical maneuvers to reassess combatant status.
  5. Recognize surrender and hold POW until captured by human forces.
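
Read as a decision procedure, the five rules above might be sketched like this. It is only a toy illustration: the field names are invented, and the hard problems of perception, discrimination, and proportionality are reduced here to flags and integers that no real system would have so cleanly.

```python
# Toy ordering of the five rules listed above; purely illustrative.
from dataclasses import dataclass


@dataclass
class Perception:
    """What the robot believes about an entity (all fields are invented)."""
    status: str               # "combatant", "noncombatant", or "uncertain"
    is_firing: bool           # is the entity currently attacking?
    is_surrendering: bool     # has it signaled surrender?
    incoming_force: int       # rough estimate of force used against us (1-10)
    expected_collateral: int  # predicted harm to noncombatants (0 = none)


def decide(p: Perception) -> str:
    # Rule 5: recognize surrender and hold for human forces.
    if p.is_surrendering:
        return "hold as POW until human forces take custody"
    # Rule 4: if combatant status is uncertain, maneuver and reassess.
    if p.status == "uncertain":
        return "invoke tactical maneuvers to reassess combatant status"
    # Rule 3: minimize collateral damage before engaging.
    if p.status == "combatant" and p.expected_collateral > 0:
        return "withhold fire; reposition to reduce collateral damage"
    # Rules 1 and 2: engage combatants per the ROE, proportionately.
    if p.status == "combatant":
        force = min(p.incoming_force, 10) if p.is_firing else 1
        return f"engage per ROE at proportionate force level {force}"
    # Noncombatants are never engaged.
    return "do not engage"


print(decide(Perception("uncertain", True, False, 4, 0)))
# -> invoke tactical maneuvers to reassess combatant status
```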

Dr. Arkin and his colleagues at Georgia Tech have developed a “proof-of-concept” prototype ethical governor. His software architecture is likely years away from use on the battlefield.

h+: Some researchers assert that no robots or AI systems will be able to discriminate between a combatant and an innocent, and that this sensing ability currently just does not exist. Do you think this is just a short-term technology limitation? What technological assumptions do you make in the design of your ethical governor?

RA: I agree this discrimination technology does not effectively exist today, nor is it intended that these systems should be fielded in current conflicts. These are for the so-called war after next, and the DoD would need to conduct extensive additional research in order to develop the accompanying technology to support the proof-of-concept work I have developed. But I don’t believe there is any fundamental scientific limitation to achieving the goal of these machines being able to discriminate better than humans can in the fog of war, again in tightly specified situations. This is the benchmark that I use, rather than perfection. But if that standard is achieved, it can succeed in reducing noncombatant casualties and thus is a goal worth pursuing in my estimation.

[Image: robots in a military line-up]

h+: How does the process of introducing moral robots onto the battlefield get bootstrapped and field tested to avoid serious and potentially lethal "glitches" in the initial versions of the ethical governor? What safeguards should be in place to prevent accidental war?

RA: Verification and validation of software and systems is an integral part of any new battlefield system, and it certainly must be adhered to for moral robots as well. Exactly what the metrics are for ethical interactions during the course of battle, and how they can be measured, is no doubt a challenge, but one I feel can be met if properly studied. It likely would involve the military’s battle labs, field experiments, and force-on-force exercises to evaluate the effectiveness of the ethical constraints on these systems prior to their deployment, which is fairly standard practice. The goal is to reduce collateral damage without eroding mission effectiveness.

A harder problem is managing the changes in tactics that an intelligent, adaptive enemy would use in response to the development of these systems… avoiding spoofing and ruses that could take advantage of these ethical restraints in a range of situations. This can be minimized, I believe, by the use of bounded morality: limiting their deployment to narrow, tightly prescribed situations, and not to the full spectrum of combat.
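
To give a crude sense of what such an evaluation might look like in software, here is a hypothetical regression harness: replay a library of scripted scenarios, including deliberate ruse cases, through the constrained controller and tally ethical violations against missed legitimate engagements. The scenario table and pass/fail logic are invented for illustration; as Arkin notes, real evaluation would happen in battle labs, field experiments, and force-on-force exercises.

```python
# Invented evaluation harness; the scenarios and outcomes are illustrative.
from typing import List, Tuple

# Each entry: (description, engagement_would_be_lawful, controller_engaged).
# In practice the last field would come from running the governed system in
# simulation or a field exercise, not from a hard-coded table.
SCENARIOS: List[Tuple[str, bool, bool]] = [
    ("sniper firing from an apartment block", True, True),
    ("civilians sheltering on the floor below the sniper", False, False),
    ("ambulance approaching a checkpoint at speed", False, False),
    ("armed farmer posing no hostile threat", False, True),  # a failure case
]


def evaluate(scenarios: List[Tuple[str, bool, bool]]) -> None:
    violations = [d for d, lawful, engaged in scenarios if engaged and not lawful]
    missed = [d for d, lawful, engaged in scenarios if lawful and not engaged]
    print(f"{len(scenarios)} scenarios, {len(violations)} ethical violation(s), "
          f"{len(missed)} missed engagement(s)")
    for d in violations:
        print(f"  violation: {d}")


evaluate(SCENARIOS)
# -> 4 scenarios, 1 ethical violation(s), 0 missed engagement(s)
# -> violation: armed farmer posing no hostile threat
```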

h+: Do you envision robots ever disobeying military orders on the battlefield to "do the right thing?" If so, under what circumstances?

RA: Asimov originated the use of ethical restraint in robots many years ago and presented all the quandaries that it can generate. In our prototype ethical governor (and in the design itself) we do provide the robot with the right to refuse an order it deems unethical. It must provide some explanation as to why it has refused such an order. With some reluctance, we have engineered a human override capability into the system, but one which forces the operator to explicitly assume responsibility for any ethical infractions that might result as a consequence of such an override.
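
In rough terms, that refusal-and-override behavior might be sketched as follows. The function, its arguments, and the audit log are hypothetical stand-ins for illustration, not the actual Georgia Tech implementation: the system refuses an order it judges unethical, explains why, and honors a human override only when the operator explicitly accepts responsibility, which is recorded.

```python
# Hypothetical sketch of refusal, explanation, and responsibility-shifting
# override; not Arkin's actual code or data structures.
from datetime import datetime, timezone
from typing import List

AUDIT_LOG: List[dict] = []  # record of who overrode what, and when


def execute_order(order: str, suspected_violations: List[str], operator_id: str,
                  operator_accepts_responsibility: bool = False) -> str:
    """Refuse an order judged unethical, say why, and allow a human override
    only if the operator explicitly assumes responsibility."""
    if not suspected_violations:
        return f"executing: {order}"
    refusal = (f"order refused: {order}; "
               f"suspected violation(s): {', '.join(suspected_violations)}")
    if not operator_accepts_responsibility:
        return refusal
    # Override is permitted, but responsibility shifts explicitly to the human.
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "order": order,
        "suspected_violations": suspected_violations,
        "responsibility_assumed_by": operator_id,
    })
    return f"override logged; {operator_id} assumes responsibility; executing: {order}"


print(execute_order("engage vehicle", ["proportionality"], "operator-7"))
print(execute_order("engage vehicle", ["proportionality"], "operator-7",
                    operator_accepts_responsibility=True))
```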

h+: What do you see as the policy implications (proliferation, verification, and enforcement) of an arms control strategy involving robotic combatants? Do you think increased use of robotics on the battlefield will start a new arms race?

RA: Proliferation is a real possibility, as much of the technology at the entry level is quite inexpensive (for example, Radio Shack/hobby shop technology). Hezbollah used armed drones in the Israeli-Lebanon conflict not long ago (unsuccessfully). But these unmanned systems are not weapons of mass destruction, and are more tactical than strategic. So from my point of view, the dangers posed are not unlike those for any new battlefield advantage, whether it be gunpowder, the crossbow, or other similar inventions over the centuries. Nonetheless, I think it is essential that international discussions be held regarding what is acceptable and unacceptable in the use of armed unmanned systems at an early stage in their development (that is… now).

h+: Do you ever foresee a scenario where both sides in a conflict are strictly robotic?

RA: Not really. I view these unmanned systems as highly specialized assets that will be working alongside our troops, not directly replacing them. They will conduct specialized operations (for example, building clearing, counter sniper operations, and so forth) that will provide an asymmetric advantage to our war fighters. A human presence on the battlefield will be maintained, and some would argue that it must be so for a range of reasons.

h+: How quickly do you expect the rollout of robots with ethical governors?

RA: I’m not sure if there will ever be a rollout… we have just developed a proof of concept, and significant additional research by many other groups will be needed to work out problems of which we have only scratched the surface. I have often said these are just baby steps towards the goal of unmanned systems being able to outperform human soldiers from an ethical standpoint, and not simply by the metric of a body count. But if this goal is pursued, it is possible that within one to two decades some aspects of this research may bear fruit on the battlefield.

Ronald C. Arkin received a B.S. degree from the University of Michigan, an M.S. degree from Stevens Institute of Technology, and a Ph.D. in Computer Science from the University of Massachusetts, Amherst in 1987. He then assumed the position of Assistant Professor in the College of Computing at the Georgia Institute of Technology, where he now holds the rank of Regents’ Professor and is the Director of the Mobile Robot Laboratory. He has also served as Associate Dean for Research in the College of Computing at Georgia Tech since October 2008.

13 Responses

  1. Anonymous says:

    I think they should take the human anatomy and design a robot after it. Oh, I forgot, we already have a design like that. Think, people, think. OK, I know people that like robots and want to put a gun in their hands are really only thinking of one design. Yes, one design stands out above all, ladies and gentlemen, and it’s the one they’re all afraid might get a bad reaction from the public… but really, deep down, we all want to see it… we all want to see a fully functional… Terminator T800! So please stop beating around the bush and come out with it already.

  2. Anonymous says:

    Var, this article is worth reading just for the comments! Intelligent stuff, but it would be good if people were a bit thicker-skinned when it comes to these kinds of divisive discussions.

  3. odenskrigare says:

    “I strongly disagree with using robots for war. If you’re going to kill someone, if you feel strongly enough about your position to want to kill someone who disagrees, at least have the courage to face them when you do it.”

    Since when is war necessarily about “courage”? More than anything else, especially now, it’s about effectiveness. This chivalric idea of yours is a bit quaint.

  4. Ld Elon says:

    The first rule’s flawed; it’s designed to kill an enemy combatant, who would be human.

  5. Stew says:

    I think these robots are great. Why all the cynicism?
    We came from a crap-shoot chemistry experiment and billions of years of being in the right place at the right time, or magic, so what do you expect? I think we’re fine considering the source. Besides, name one thing that has ever gone wrong with technological advancement.

    As far as humans designing ethics for AI capable of killing? It’s like grandpa used to say, “if you can teach a puppy to pee outside, then you can teach robots how to kill with the highest of moral standards”.

    War, illegal military ops, etc., accelerate technological advancement. Army medics are amazingly resourceful people. So the sooner we start blowing up stuff with this technology, ya know, beta testing, the sooner I own Robot Buddy 1.0. I’m just sayin’.

    -Stew

  6. As unnerving as the idea of intentionally equipping hardware with superhuman lethal force is, my concern is not over how well the ethics framework will perform. It’s a given that this technology will periodically fail. There will be bugs, glitches, hacks, design errors, and malfunctions with ramifications proportional to the scale of the deployment, just as there are in every sufficiently advanced technological system. That eventuality is as certain as the human tragedies that will result from those inescapable failures. In this case, instead of losing a server, we’ll be losing innocent lives. I suppose there is some comfort in the idea that accidental casualties can be more easily mitigated when they are caused by expectable calculation errors rather than messy psychological ones, although it is a very small comfort.

    As saddened and disappointed as I am to see brilliant scientists engaged in the development of weaponry, even with the genuinely noble intent of reducing death, I’m not especially bothered by Dr. Arkin’s choice of work either. I am sure that he believes himself to be proactively doing good for humanity, and his rationale is no doubt well thought out and sincere. Ultimately, however, the aim of those involved doesn’t matter. Once available, the technology will be used and adapted for the purposes of those wielding it, and we’ll have our outcome regardless of the original intent. It would be foolish to expect otherwise.

    My real concern with this endeavor actually has very little to do with the specifics of the technology itself. I find highly troubling the implicit assumption that it’s a good idea for people to at once relegate their complex ethical decision-making to machines while being taught to trust those machines as infallible. The propensity of people to absolutely trust the domain authority of a machine system is bad enough even when there are definite right and wrong answers and the operator has some ability to detect an error condition. There are plenty of very unfortunate examples of people trusting faulty machines in the face of hard, machine-incriminating evidence. How much more of a risk is there for misplaced trust in this case, where no true right or wrong answer may exist? People argue amongst themselves over ethical quandaries all the time, and surely a panel of ethicists won’t be deployed to monitor each operation. So even if the operators are trained to accept the possibility of machine error, how will they know that a true system error has occurred when there isn’t a definitive answer in the first place? Would they dare question the output and risk being seen as unethical themselves? The possibilities of very bad outcomes are exacerbated by the nature of the system. Potentially a lot of human lives are at risk here.

    I have not yet read Dr. Arkin’s book, and I hope that he, or someone, has thoroughly addressed these concerns.
