Teaching Robots the Rules of War

Military robots were once again in the news this month, with headlines like "Corpse-eating robot" (Wired), "Robot… to eat all life on Earth" (The Register), and "Sniper Bot Refuels by consuming human bodies" (OhGizmo!). Such death-dealing robotic zombies would make the Governator-style T-800 Terminators look like tame puppies.

A subsequent press release from the manufacturers, Cyclone Power Technologies and Robotic Technology Inc (RTI), set the record straight: these robots will be strictly vegetarian. Cyclone announced that it had completed the first stage of development for a beta biomass engine system used to power RTI’s Energetically Autonomous Tactical Robot (EATR™), a most unfortunate choice of name for a grass eater.

The press release goes on to point out that, “Desecration of the dead is a war crime under Article 15 of the Geneva Conventions, and is certainly not something sanctioned by DARPA, Cyclone or RTI.” But how are these autonomous robots to know the difference between battlefield dead and “twigs, grass clippings and wood chips?”

Ronald C. Arkin, director of the Mobile Robot Laboratory at Georgia Tech, may have the answer. He is designing ethical guidance software for battlefield robots under contract with the U.S. Army.

“My research hypothesis is that intelligent robots can behave more ethically in the battlefield than humans currently can,” says Dr. Arkin. “That’s the case I make.”

Much as a missile guidance system depends on radar and a radio or wired link between the control point and the missile, Arkin’s “ethical controller” is a software architecture that provides “ethical control and a reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system so that they fall within the bounds prescribed by the Geneva Conventions, the Laws of War, and the Rules of Engagement.”
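To make that quoted idea concrete, here is a minimal, hypothetical sketch of the pattern such a controller implies: a filter that sits between the tactical planner and the weapons system and passes a lethal action only if every constraint derived from the Laws of War and Rules of Engagement is satisfied. Every name, field, and check below is invented for illustration; this is not Arkin’s actual architecture.

```python
# Hypothetical sketch of a constraint-based ethical governor.
# It can only veto (suppress) a proposed lethal action; it never originates one.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class ProposedAction:
    target_is_combatant: bool   # discrimination input (assumed to come from perception)
    expected_collateral: float  # estimated harm to noncombatants
    military_necessity: float   # estimated military value of the target
    within_roe: bool            # whether the action falls inside the current ROE

# A constraint returns (satisfied, reason) so the governor can explain a veto.
Constraint = Callable[[ProposedAction], Tuple[bool, str]]

def discrimination(a: ProposedAction) -> Tuple[bool, str]:
    return a.target_is_combatant, "target not positively identified as a combatant"

def proportionality(a: ProposedAction) -> Tuple[bool, str]:
    return (a.expected_collateral <= a.military_necessity,
            "expected collateral damage exceeds military necessity")

def roe_adherence(a: ProposedAction) -> Tuple[bool, str]:
    return a.within_roe, "action falls outside the Rules of Engagement"

def ethical_governor(action: ProposedAction,
                     constraints: List[Constraint]) -> Tuple[bool, str]:
    """Permit a lethal action only if every constraint is satisfied."""
    for check in constraints:
        ok, reason = check(action)
        if not ok:
            return False, reason            # suppress the action and say why
    return True, "all constraints satisfied"
```

In this sketch, as in the quoted description, the governor’s role is purely to constrain: it can forbid a lethal action, but it does not select targets.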

Rather than guiding a missile to its intended target, Arkin’s robotic guidance system is being designed to reduce the need for humans in harm’s way: “appropriately designed military robots will be better able to avoid civilian casualties than existing human war fighters and might therefore make future wars more ethical.”

As reported in a recent New York Times article, Dr. Arkin describes some of the potential benefits of autonomous fighting robots. They can be designed without a sense of self-preservation and, as a result, with “no tendency to lash out in fear.” They can be built without anger or recklessness, and they can be made invulnerable to what he calls “the psychological problem of ‘scenario fulfillment,’ ” which causes people to absorb new information more easily if it matches their pre-existing ideas.

The SF writer Isaac Asimov first introduced the notion of ethical rules for robots in his 1942 short story “Runaround.” His famous Three Laws of Robotics state the following:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The Laws of War (LOW) and Rules of Engagement (ROE) make programming robots to adhere to Asimov’s Laws far from simple. You want robots to protect friendly forces and “neutralize” enemy combatants, which likely means harming human beings on the battlefield.

In his recent book, Governing Lethal Behavior in Autonomous Robots, Dr. Arkin explores a number of complex real-world scenarios where robots with ethical governors would “do the right thing” in consultation with humans on the battlefield. These scenarios include ROE and LOW adherence (Taliban and Iraq), discrimination (Korean DMZ), and proportionality and tactics (urban sniper).

Arkin’s “rules” end up recasting Asimov’s to look more like these (a rough sketch of how such rules might be ordered in code follows the list):

  1. Engage and neutralize targets as combatants according to the ROE.
  2. Return fire with fire proportionately.
  3. Minimize collateral damage — intentionally minimize harm to noncombatants.
  4. If uncertain, invoke tactical maneuvers to reassess combatant status.
  5. Recognize surrender and hold POWs until human forces take custody.
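
Read as software, rules like these suggest an ordered decision procedure. The sketch below is purely illustrative: the predicates and thresholds are hypothetical stand-ins for the perception and assessment outputs a real system would need, not anything taken from Arkin’s implementation.

```python
# Illustrative ordering of the five rules above as a single decision function.
# All inputs and thresholds are hypothetical.

def decide(is_combatant: bool, under_fire: bool, collateral_risk: float,
           combatant_certainty: float, has_surrendered: bool) -> str:
    if has_surrendered:                              # Rule 5: recognize surrender
        return "hold as POW until human forces take custody"
    if combatant_certainty < 0.9:                    # Rule 4: if uncertain, reassess
        return "invoke tactical maneuver to reassess combatant status"
    if not is_combatant or collateral_risk > 0.1:    # Rules 1 and 3
        return "do not engage"
    if under_fire:                                   # Rule 2: proportionate response
        return "return fire proportionately"
    return "engage and neutralize per ROE"           # Rule 1
```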

Dr. Arkin and his colleagues at Georgia Tech have developed a “proof-of-concept” prototype ethical governor. His software architecture is likely years away from use on the battlefield.

h+: Some researchers assert that no robots or AI systems will be able to discriminate between a combatant and an innocent, and that this sensing ability simply does not exist today. Do you think this is just a short-term technology limitation? What technological assumptions of this sort do you make in the design of your ethical governor?

RA: I agree this discrimination technology does not effectively exist today, nor is it intended that these systems should be fielded in current conflicts. These are for the so-called war after next, and the DoD would need to conduct extensive additional research in order to develop the accompanying technology to support the proof-of-concept work I have developed. But I don’t believe there is any fundamental scientific limitation to achieving the goal of these machines being able to discriminate better than humans can in the fog of war, again in tightly specified situations. This is the benchmark that I use, rather than perfection. But if that standard is achieved, it can succeed in reducing noncombatant casualties and thus is a goal worth pursuing in my estimation.

[Photo: robots in a military lineup]

h+: How does the process of introducing moral robots onto the battlefield get bootstrapped and field tested to avoid serious and potentially lethal "glitches" in the initial versions of the ethical governor? What safeguards should be in place to prevent accidental war?

RA: Verification and validation of software and systems is an integral part of any new battlefield system, and it certainly must be adhered to for moral robots as well. What exactly the metrics are and how they can be measured for ethical interactions during the course of battle is no doubt a challenge, but one I feel can be met if properly studied. It likely would involve the military’s battle labs, field experiments, and force-on-force exercises to evaluate the effectiveness of the ethical constraints on these systems prior to their deployment, which is fairly standard practice. The goal is to reduce collateral damage without eroding mission effectiveness.

A harder problem is managing the changes and tactics that an intelligent adaptive enemy would use in response to the development of these systems… to avoid spoofing and ruses that could take advantage of these ethical restraints in a range of situations. This can be minimized, I believe, by the use of bounded morality: limiting their deployment to narrow, tightly prescribed situations, and not the full spectrum of combat.

h+: Do you envision robots ever disobeying military orders on the battlefield to "do the right thing?" If so, under what circumstances?

RA: Asimov originated the use of ethical restraint in robots many years ago and presented all the quandaries that it can generate. In our prototype ethical governor (and in the design itself) we do provide the robot with the right to refuse an order it deems unethical. It must provide some explanation as to why it has refused such an order. With some reluctance, we have engineered a human override capability into the system, but one which forces the operator to explicitly assume responsibility for any ethical infractions that might result as a consequence of such an override.
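
That refuse-and-override interaction can be sketched as a simple pattern: the governor may veto an order and must give a reason, and a human override is accepted only if an operator explicitly assumes responsibility, which is recorded. The class and names below are invented for illustration and are not the project’s code.

```python
# Hypothetical sketch of order refusal with explanation and human override.

class OrderHandler:
    def __init__(self, governor):
        # `governor` is any callable returning (permitted, reason) for an order.
        self.governor = governor
        self.audit_log = []

    def execute(self, order, operator_id=None, override=False):
        permitted, reason = self.governor(order)
        if permitted:
            self.audit_log.append(("executed", order, reason))
            return "executing order"
        if override and operator_id is not None:
            # The operator explicitly assumes responsibility for any infraction.
            self.audit_log.append(("override", order, reason, operator_id))
            return f"executing under human override; responsibility assigned to {operator_id}"
        # Refusal always comes with the governor's explanation.
        self.audit_log.append(("refused", order, reason))
        return f"order refused: {reason}"
```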

h+: What do you see as the implications for policy proliferation, verification and enforcement of an arms control strategy involving robotic combatants? Do you think increased use of robotics in the battlefield will start a new arms race?

RA: Proliferation is a real possibility, as much of the technology at the entry level is quite inexpensive (for example, Radio Shack/hobby shop technology). Hezbollah used armed drones in the Israeli-Lebanon conflict not long ago (unsuccessfully). But these unmanned systems are not weapons of mass destruction, and are more tactical than strategic. So from my point of view, the dangers posed are not unlike those for any new battlefield advantage, whether it be gunpowder, the crossbow, or other similar inventions over the centuries. Nonetheless, I think it is essential that international discussions be held on what is acceptable and unacceptable in the use of armed unmanned systems at an early stage in their development (that is… now).

h+: Do you ever foresee a scenario where both sides in a conflict are strictly robotic?

RA: Not really. I view these unmanned systems as highly specialized assets that will be working alongside our troops, not directly replacing them. They will conduct specialized operations (for example, building clearing, counter sniper operations, and so forth) that will provide an asymmetric advantage to our war fighters. A human presence on the battlefield will be maintained, and some would argue that it must be so for a range of reasons.

h+: How quickly do you expect the rollout of robots with ethical governors?

RA: I’m not sure if there will ever be a rollout… we have just developed a proof of concept, and significant additional research by many other groups will be needed to work out problems of which we have just scratched the surface. I have often said these are just baby steps toward the goal of unmanned systems being able to outperform human soldiers from an ethical standpoint, and not simply by the metric of a body count. But if this goal is pursued, it is possible that within one to two decades some aspects of this research may bear fruit on the battlefield.

Ronald C. Arkin received a B.S. degree from the University of Michigan, an M.S. degree from Stevens Institute of Technology, and a Ph.D. in Computer Science from the University of Massachusetts, Amherst in 1987. He then assumed the position of Assistant Professor in the College of Computing at the Georgia Institute of Technology, where he now holds the rank of Regents’ Professor and is the Director of the Mobile Robot Laboratory. He has also served as Associate Dean for Research in the College of Computing at Georgia Tech since October 2008.