13 April 2018

Autonomous weapons and the law: the Yale and Brookings discussions

BY CHARLIE DUNLAP, J.D.

One of the hottest topics these days in the law of war is the increasing autonomy in weaponry. We are not yet seeing (and may never see) the emergence of a “Terminator” robot, but there are still plenty of complex issues to discuss. In anticipation of this week’s meeting at the UN of the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS), events were held at Yale and Brookings last week. I was privileged to participate in both, and here are some observations about those discussions.

At Yale Law School, the dialogue was cast as a debate between my friend Professor Rebecca Crootof and me, entitled “Killer Robots: Is Existing Law Sufficient? A Debate on How Best to Regulate Autonomous Weapons.” Professor Crootof has a new paper, “Autonomous Weapon Systems and the Limits of Analogy,” in which she argues that analogies often made to “weapons already in use” as well as “analogies based on unconventional entities that participate in armed conflict—namely, child soldiers and animal combatants” do not work for autonomous weapons.

My long-standing view is that the best way to regulate any weapon (including autonomous and other high-tech weapons) is to insist that it strictly adhere to the existing law of war, as opposed to trying to create a specialized legal regime for every new technology that appears. Any technology-specific legal regime inevitably captures the technology at a particular “snapshot” in time, and that can cause unintended and even counterproductive consequences as the science advances.

That the existing law of war can accommodate autonomous weapons is a position for which I think there is now almost “universal consensus.” However, as the debate proceeded it became apparent that Professor Crootof was not really calling for an abandonment of the entire corpus of the law of war. Instead, her concern seems to focus on developing norms for the testing and evaluation of autonomous weapons, as well as norms under the state responsibility doctrine governing culpability when an autonomous weapon causes unintended and unexpected harm. With her concept cabined in that way, we found much agreement.

Accountability

Along that line, one of the issues that arose at Yale (and Brookings) was the notion of personal (as opposed to state) accountability for acts done by autonomous weapons that might violate the law of war. As I’ve written elsewhere (“Accountability and Autonomous Weapons: Much Ado About Nothing?”), the ability to hold individuals criminally accountable is not a prerequisite for finding a weapon lawful in accordance with Article 36 of Additional Protocol I to the Geneva Conventions. That said, I argued in that article that there are several ways to hold people accountable, the fundamental proposition being that whoever activates the autonomous system must have a reasonable understanding of it and must be able to reasonably anticipate that, under the circumstances, the weapon will operate in compliance with the law of war.

Jens David Ohlin at Cornell Law has written an excellent and thoughtful article (“The Combatant’s Stance: Autonomous Weapons on the Battlefield”) that “concludes that there is one area where international criminal law is ill suited to dealing with a military commander’s responsibility for unleashing” an autonomous weapon. Ohlin correctly predicts that many cases “will be based on the commander’s recklessness and unfortunately international criminal law has struggled to develop a coherent theoretical and practical program for prosecuting crimes of recklessness.” While I do not question Professor Ohlin’s conclusions about the international law precedents he examines, I would offer that there could also be potential accountability for individuals who fail to act “reasonably” in the employment of autonomous weapons.

“Reasonable Military Commander”

In an important new article (“Proportionality Under International Humanitarian Law: The ‘Reasonable Military Commander’ Standard and Reverberating Effects”), Ian Henderson and Kate Reece do not address autonomous weapons per se, but rather set forth the well-established law of war rules known as the principle of proportionality, and discuss how that principle should be applied. As to context, they explain that:

“The principle of proportionality protects civilians and civilian objects against expected incidental harm from an attack that is excessive to the military advantage anticipated from the attack. Military commanders are prohibited from planning or executing such indiscriminate attacks. The principle of proportionality is accepted as a norm of customary international law applicable in both international and non-international armed conflict. The test for proportionality has been codified in Additional Protocol I.”

The relevant provisions of Additional Protocol I prohibit attacks that: “may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated.” (Citations omitted.)

A key aspect of Henderson and Reece’s article is that they examine the standard to be used in judging the attacker’s compliance with the principle of proportionality. If an attacker fails to meet the standard, the attack could be considered an indiscriminate one, and that can amount to a “grave breach” of the law of war.

Since the rules require determining relative values (e.g., what is “excessive in relation to the concrete and direct military advantage anticipated”), Henderson and Reece conclude that the current international law standard for assessing those value determinations is that of a “reasonable military commander.” (In the case of a civilian employing an autonomous weapon, the standard would be that of “a person with all the experience, training, and understanding of military operations that is vested in a ‘reasonable military commander.’”)

Henderson and Reece point out that in the Galić case the International Criminal Tribunal for the former Yugoslavia (ICTY) noted that in “determining whether an attack was proportionate it is necessary to examine whether a reasonably well-informed person in the circumstances of the actual perpetrator, making reasonable use of the information available to him or her, could have expected excessive civilian casualties to result from the attack.” (The trial court in Galić found that “certain apparently disproportionate attacks may give rise to the inference that civilians were actually the object of attack” and that this “is to be determined on a case-by-case basis in light of the available evidence.”)

In other words, an attacker acting unreasonably in his or her use of autonomous weapons that cause, for example, excessive civilian casualties may be criminally culpable – and this may help mitigate if not obviate Professor Ohlin’s concerns.

The event at Brookings was the fifth annual Justice Stephen Breyer lecture, and this year’s edition addressed “Autonomous weapons and international law.” The lecturer was Notre Dame Law’s Mary Ellen O’Connell, and the discussants were Jeroen van den Hoven and myself – with Brookings’ Ted Piccone moderating. Video of the entire discussion can be found here (my remarks start at about the 51:48 mark).

I discussed some of the same issues as at Yale, but focused especially on the challenges associated with fully autonomous weapons, which do not yet exist but could be developed in the coming years. I define these weapons as systems with a “machine-learning” capability supported by artificial neural networks.

DoD Directive 3000.09 defines “autonomous” weapon systems as those that, “once activated, can select and engage targets without further intervention by a human operator.” DoD also insists that “Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” (Italics added.) The challenge is how to engineer an “appropriate” level of human judgment into a “machine-learning” system.
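To make that design problem concrete, here is a minimal, purely illustrative sketch of one way an engagement “gate” might be structured, with a model recommending an engagement but a human decision remaining the final step. The names, fields, and threshold below are all hypothetical; they are not drawn from the Directive or from any actual system:

```python
# A minimal, purely illustrative sketch of a human-judgment "gate." The class,
# field names, and threshold are hypothetical and not drawn from any real system.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    track_id: str        # the sensor track the model proposes to engage
    target_class: str    # e.g., "anti-ship missile" (invented label)
    confidence: float    # the model's confidence in its own classification

def authorize_engagement(rec: Recommendation,
                         human_confirms: Callable[[Recommendation], bool],
                         min_confidence: float = 0.99) -> bool:
    """Engage only if the model is confident AND a human operator approves.

    The hard question in the text -- what counts as an "appropriate" level of
    human judgment -- shows up here as the choice of threshold and of whether
    the human_confirms step may ever be bypassed.
    """
    if rec.confidence < min_confidence:
        return False                 # machine uncertainty alone blocks the engagement
    return human_confirms(rec)       # a human decision remains the final gate
```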

Semi-autonomous weapons (defined by DoD as a “weapon system that, once activated, is intended to only engage individual targets or specific target groups that have been selected by a human operator”) do exist. The Phalanx Close-In Weapons System (CIWS), which has been in service since 1980, is one example. It’s described as an “entirely self-contained unit, the mounting houses the gun, an automated fire-control system and all other major components, enabling it to automatically search for, detect, track, engage, and confirm kills using its computer-controlled radar system.” However, it is still human-supervised, and can only attack specific target groups (e.g., missiles, boats, and planes) identified by a human operator.
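By way of a purely notional illustration – and emphatically not how Phalanx or any fielded system is actually implemented – the “semi-autonomous” idea can be thought of as a filter: a human operator pre-selects the target classes that may be engaged, and the system then acts on its own only within that set. The class labels here are invented:

```python
# A purely notional sketch of the "semi-autonomous" concept: the operator
# pre-selects which target classes may be engaged, and the system engages only
# within that set. The class labels are invented; this is not how Phalanx (or
# any fielded system) is implemented.
def may_engage(track_class: str, operator_selected_classes: frozenset) -> bool:
    """Permit automatic engagement only against classes a human selected in advance."""
    return track_class in operator_selected_classes

# Example: an operator authorizes engagement only of missile and fast-boat tracks.
authorized = frozenset({"anti-ship missile", "fast attack boat"})
print(may_engage("anti-ship missile", authorized))   # True
print(may_engage("fishing vessel", authorized))      # False
```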

Obviously, fully autonomous weapons will require a lot of scenario testing before deployment. This is because however and whatever such a weapon “learns,” it must apply force in a manner consonant with the law of war to the same degree as (or better than) a fully human system. Indeed, many experts believe there is great potential for autonomous systems to be more precise in their use of force and, therefore, more protective of civilians – not only because of potentially superior sensors, but also because they don’t suffer the fatigue and nefarious emotions that can distort human judgment.

Regardless, prior to deployment it must be demonstrated that a particular autonomous weapon can consistently operate lawfully – a clearly difficult task, especially in the case of machine-learning systems. However, private enterprise might be helpful in developing the necessary analysis and validation protocols. Keep in mind that machine-learning devices enabled by artificial neural networks will hardly be confined to weaponry. Rather, they will someday be found in many different kinds of civilian products.

For this reason, I’m convinced that industry will need to develop the kind of sophisticated and robust evaluation process that these autonomous systems will require in order to be confident that they will do what we want them to do. (If industry needs any prompting, the plaintiffs’ bar could provide the incentive.) I believe that what is learned in the private sector about controlling the risk occasioned by machine-learning devices could have utility in evaluating advanced autonomous weaponry.
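To give a feel for what such an evaluation process might involve – and this is only a rough, assumption-laden sketch, not anyone’s actual protocol – imagine a hypothetical policy(scenario) function that returns whatever objects the system would engage in a simulated scenario; one could then measure a compliance rate over large numbers of scenarios and field the system only if it clears a demanding threshold:

```python
# A rough, assumption-laden sketch of scenario-based validation. The
# policy(scenario) function, the scenario fields, and the threshold are all
# hypothetical; nothing here reflects an actual test-and-evaluation protocol.
from typing import Callable, Iterable, List, Dict

def compliance_rate(policy: Callable[[Dict], List[str]],
                    scenarios: Iterable[Dict]) -> float:
    """Fraction of scenarios in which the policy engages no protected object."""
    total = compliant = 0
    for scenario in scenarios:
        total += 1
        engaged = set(policy(scenario))
        protected = set(scenario["protected_objects"])    # e.g., civilians, hospitals
        if engaged.isdisjoint(protected):                  # nothing protected was engaged
            compliant += 1
    return compliant / total if total else 0.0

def release_for_fielding(policy: Callable[[Dict], List[str]],
                         scenarios: Iterable[Dict],
                         required_rate: float = 1.0) -> bool:
    """Field the system only if it meets the required compliance rate."""
    return compliance_rate(policy, scenarios) >= required_rate
```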

I reiterated my view that it’s important that autonomous weapons systems be governed by the existing law of armed conflict. I’m concerned that too many actors would want to believe there was a lacuna in the law with respect to these weapons.

After all, China “has declined to clarify how and whether it believes the international law governing the use of [force] applies to cyber warfare.” Consider as well that the chief of the Russian General Staff Gen. Valery Gerasimov said that in a future war, the “objects of the economy and the state administration of the enemy will be subject to immediate destruction.” Either Gerasimov is unaware of the law of war principle of distinction, or chooses to ignore it. In any event, now is not the time to tell the international community that existing law cannot accommodate autonomous weapons.

Finally, I think it’s important to keep in mind that barring the use of a weapon does not necessarily lead to fewer civilian losses. (“The Moral Hazard of Inaction in War”).


The UN Meeting


According to the UN, the meeting this week will address “overarching issues in the area” of autonomous weapons including: 

Characterization of the systems under consideration in order to promote a common understanding on concepts and characteristics relevant to the objectives and purposes of the CCW; 

Further consideration of the human element in the use of lethal force; aspects of human-machine interaction in the development, deployment and use of emerging technologies in the area of lethal autonomous weapons systems; 

Review of potential military applications of related technologies in the context of the Group’s work; 

Possible options for addressing the humanitarian and international security challenges posed by emerging technologies in the area of LAWS in the context of the objectives and purposes of the Convention without prejudging policy outcomes and taking into account past, present and future proposals. 

Personally, I don’t expect that anything dramatic will come out of the meeting in terms of a substantive agreement that includes the major warfighting states. I just don’t know that the technology is far enough advanced, or understood clearly enough, to expect a significant accord to be forthcoming in the near future. However, the discussions may help norms begin to evolve – particularly with respect to testing and evaluation – that could facilitate shaping the legal environment for these weapons which, in my view, are inevitable.

Hyperwar and autonomous weaponry

Why “inevitable”? The emergence of what is known as hyperwar demands a speed of decision-making that in many instances only machines can achieve. Retired Marine General John Allen and Amir Husain explained in Proceedings last year (“On Hyperwar”) that:

Until the present time, a decision to act depended on human cognition. With autonomous decision making, this will not be the case. While human decision making is potent, it also has limitations in terms of speed, attention, and diligence. For example, there is a limit to how quickly humans can arrive at a decision, and there is no avoiding the “cognitive burden” of making each decision. There is a limit to how fast and how many decisions can be made before a human requires rest and replenishment to restore higher cognitive faculties. 

This phenomenon has been studied in detail by psychologist Daniel Kahneman, who showed that a simple factor such as the lack of glucose could cause judges—expert decision makers—to incorrectly adjudicate appeals. Tired brains cannot carefully deliberate; instead, they revert to instinctive “fast thinking,” creating the potential for error. Machines do not suffer from these limitations. And to the extent that machine intelligence is embodied as easily replicated software, often running on inexpensive hardware, it can be deployed at scales sufficient to essentially enable an infinite supply of tactical, operational, and strategic decision making.

The warfighting advantage that such speed provides will prove irresistible to militaries around the globe. Last January the Economist reported that James Miller, the former Under-Secretary of Defense for Policy at the Pentagon, “says that although America will try to keep a human in or on the loop, adversaries may not.” According to Miller, such adversaries might “decide on pre-delegated decision-making at hyper-speed if their command-and-control nodes are attacked.” Moreover, he “thinks that if autonomous systems are operating in highly contested space, the temptation to let the machine take over will become overwhelming.”

Concluding thoughts

Accordingly, I believe efforts to ban autonomous weaponry are profoundly ill-conceived and, frankly, pointless. Let’s begin not with trying to emplace a prohibition, but rather with efforts to develop engineering and testing norms that will enable these systems to be used in conformance with the law of war – at least as effectively as humans use weapons. Put another way, if we are to preserve our freedom, we have to be prepared to meet the future, and that future certainly includes autonomous weapons.

Bonus: What should you be reading about this topic?

Lt Col Chris Ford has a terrific new article (“Autonomous Weapons and International Law”) which addresses a range of legal issues associated with autonomous weapons. Also, I think this new monograph (“The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation”) by a consortium of think tanks provides a lot of context by addressing the variety of challenges raised by artificial intelligence across society.

Finally, there is Paul Scharre’s soon-to-be-released book, Army of None: Autonomous Weapons and the Future of War, which is sure to become a “must have” (and I’ll be reviewing it in a future post).

As we like to say on Lawfire, check the facts, assess the law, and make your own decision!
