15 September 2019

Guiding the Unknown: Ethical Oversight of Artificial Intelligence for Autonomous Weapon Capabilities

By Gretchen Nutz

It is not news that autonomous weapons capabilities powered by artificial intelligence are evolving fast. Many scholars and strategists foresee this new technology changing the character of war and challenging existing frameworks for thinking about just or ethical war in ways the U.S. national security community is not yet prepared to handle. Until policy makers know enough to draw realistic ethical boundaries, prudent leaders are likely to focus on measures that balance competing obligations and pressures during this ambiguous development phase.

On the one hand, leaders have an ethical responsibility to prepare for potential threats from near-peer competitors such as China. On the other hand, they face a competing obligation to ensure increasingly autonomous systems do not spark or escalate an unnecessary conflict that would violate Americans’ understanding of the appropriate use of force. The following hypothetical scenarios illustrate some of the competing obligations and pressures U.S. technology experts suggest national security leaders must balance and address now.[1]

Scenario 1: China Attacks With Weaponized Robots


The Pentagon recently released an artificial intelligence strategy that warns China and Russia are accelerating investments in this technology, which could “erode our technological and operational advantages” unless U.S. innovation also expedites efforts to refine and deploy artificial intelligence.[2] For example, Elsa Kania, a scholar with the Center for a New American Security who studies the Chinese use of artificial intelligence, warns that China’s military may someday deploy its artificial intelligence technologies in swarms of drones that could potentially “target and saturate the defenses of U.S. aircraft carriers.”[3] Kania notes improvements in this technology will likely allow drones to coordinate with each other autonomously and adapt to changes in their environment, enabling them to outmaneuver or negate countermeasures. U.S. policy makers therefore face pressure to accelerate the adoption of artificial intelligence technologies to ensure that defensive measures can overcome the advantages an adversary’s more autonomous systems would enjoy in a future wartime scenario.

Brian Michelson, a U.S. Army colonel who has studied and written on the military applications of artificial intelligence, argues the debate over autonomy in weapons should include the question, “Would it be moral to cause more casualties for our forces by overly limiting our weapons capabilities?”[4] Extreme caution or risk aversion is not necessarily the safe ethical or moral choice.

Scenario 2: Algorithms Trigger a Flash War

In 2010, the stock market suffered a trillion-dollar “flash crash” that illustrates the difficulties and risks that arise when algorithms interpret chaotic, real-world data. During that event, regulators assessed that algorithms in automated trading software began rapidly selling stock futures and then responded to the uptick in activity they themselves generated by selling even more, on a day when external factors like a European debt crisis had already made the market particularly sensitive.[5] Fortunately, the market recovered most of the roughly one trillion dollars in value lost that day, which CNN Money called the “largest one-day drop on record.”[6] A weapons system that similarly misreads data could trigger escalatory violence, tragically violating Just War and other international norms on appropriate reasons and methods for using military force. Think, for example, of the 1988 incident in which a U.S. guided missile cruiser shot down an Iranian passenger jet the ship’s Aegis targeting system misidentified as a military aircraft.[7] Such an error is far harder to reverse than a stock market loss.
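
The mechanics of that feedback loop are easy to see in a toy model. The sketch below is a deliberately simplified simulation, unrelated to any actual trading or weapons software: agents that sell whenever trading volume looks abnormally high end up feeding the very volume signal they react to, so a single external shock cascades.

```python
# A deliberately simplified toy model (not the actual 2010 trading algorithms):
# agents sell when recent volume looks abnormally high, and their own selling
# inflates the volume signal that the next round of decisions reacts to.
import random

def simulate_cascade(steps=15, agents=50, base_volume=100.0,
                     sell_threshold=1.5, shock=2.0, seed=1):
    """Return (step, price, sellers) history for a volume-driven sell-off."""
    random.seed(seed)
    price = 100.0
    prev_volume = base_volume * shock   # external shock: unusually high volume
    history = []
    for step in range(steps):
        spike = prev_volume / base_volume
        # Each agent sells if the perceived spike exceeds its threshold.
        sellers = sum(1 for _ in range(agents)
                      if spike * random.uniform(0.8, 1.2) > sell_threshold)
        price *= 1 - 0.002 * sellers                 # selling pushes price down...
        prev_volume = base_volume + sellers * 10.0   # ...and raises the volume
        history.append((step, round(price, 2), sellers))  # signal for the next step
    return history

if __name__ == "__main__":
    for step, price, sellers in simulate_cascade():
        print(f"step {step:2d}  price {price:7.2f}  sellers {sellers:2d}")
```

The danger the scenario highlights is structural: once a system’s own outputs become part of the data stream it reacts to, a misreading can compound faster than humans can intervene.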

These hypotheticals underscore competing obligations to develop autonomous capabilities to defend against threats while ensuring those capabilities do not become Just War-violating threats themselves. Just War violations might include disproportionate violence or disregard for non-combatant safety stemming from algorithms that misinterpret data. According to Aristotle and other philosophers discussed below, attempts to avoid violations of Just War principles in isolation from the real-world drivers for greater autonomy would in fact be unethical.[8] At the same time, dismissing ethical considerations as too abstract or rigid to inform artificial intelligence development would amount to defaulting to machine-driven decision making, endangering the intrinsic humanity of the society that policy makers aim to protect. But are there ethical principles to assist concerned policy makers who are still working with limited information about the true capabilities of artificial intelligence?

In ancient Greece, Aristotle described prudence as a virtue that takes competing obligations head-on, which is an approach to ethical decision-making that considers what is possible given real-world pressures rather than simply what avoids moral harm.[9] A present-day philosopher, J. Patrick Dobel, says prudence involves “finding a concrete ‘shape’ to moral aspirations, responsibilities, and obligations.”[10] Retired U.S. Army Lieutenant General James Dubik has similarly argued that strategic dimensions of Just War theory call for practical prudence—using sound judgment, assessing the specific facts of situations requiring decisions, and picking the best action given real world constraints.[11]

Three Ways to Apply Aristotelian Thinking to the Development of Autonomous Capabilities

Drawing on Dobel’s book Public Integrity, three practical considerations offer guideposts to policy makers in applying prudence when balancing competing obligations: prudent leaders should prioritize ethical leadership capacity, research and development modalities, and engagement with society’s moral sensibilities as they oversee emerging autonomous capabilities.[12]

First, ethically-conscious policy makers will pursue technical training and input from diverse perspectives, thereby expanding their own capacity for ethical leadership. The reality is that revolutions in military technology often ride on the shoulders of risk-tolerant mavericks, and leaders will need at least a modicum of technical savvy to oversee them. For example, when Mason Patrick took command of the Army Air Service in the 1920s, he invested time to become a pilot himself and was then able to temper the influence of the brilliant but reckless Billy Mitchell, widely considered the “father of the U.S. Air Force.”[13] For national security leaders today, this aspect of prudence might look like investing time in accessible tech courses for non-technical personnel.

Similarly, prudent leaders will aggressively gather input from diverse operational, technical, and non-government perspectives, even when inconvenient. Dubik argues that war is too complex for a single mind to determine moral implications alone, and the same is true when attempting to overcome the unknowns and unverified assumptions of new, paradigm-bending technologies. Assuming computational tools will operate in noisy, real-world environments the same way they have with clean data sets in controlled development environments is problematic but perhaps all too common.[14] Particularly given the potential for flash crash scenarios when algorithms process and act on confusing data streams in real time, prudence requires pushing past convenient assumptions and soliciting dissenting opinions.
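
The gap between the two environments can be illustrated with a short sketch. The example below uses entirely synthetic data and a deliberately crude decision rule; the only point is that a boundary learned under clean development conditions can degrade sharply when the same rule meets noisier inputs.

```python
# A minimal sketch with hypothetical, synthetic data (not any fielded system):
# a decision boundary fit on clean, well-separated training data loses accuracy
# when the same rule is applied to noisier, field-like inputs.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, noise_scale):
    """Two classes separated along one feature; noise_scale blurs the boundary."""
    labels = rng.integers(0, 2, size=n)
    centers = np.where(labels == 1, 2.0, -2.0)
    features = centers + rng.normal(0.0, noise_scale, size=n)
    return features, labels

# "Development environment": clean, well-separated training samples.
x_train, y_train = make_data(1000, noise_scale=0.5)
threshold = x_train.mean()            # a very simple learned decision boundary

def accuracy(x, y):
    return ((x > threshold).astype(int) == y).mean()

# "Operational environment": the same decision rule meets much noisier inputs.
x_field, y_field = make_data(1000, noise_scale=3.0)

print(f"accuracy on clean development data: {accuracy(x_train, y_train):.2f}")
print(f"accuracy on noisy field-like data:  {accuracy(x_field, y_field):.2f}")
```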

Second, ethical policy makers will employ what Dobel would call “modalities”—a combination of mindsets and methods—for prudent oversight. For oversight of artificial intelligence, two of his modalities are especially critical: aligning means with ends and following an iterative development process with application-specific ethical evaluations.[15] Aligning means to ends in the context of autonomous weapon capabilities involves weighing risks and rewards, philosophically extending the traditional Just War concepts of military necessity and proportionality to the interim development phase of new capabilities. 

Leaders can apply these modalities by evaluating specific AI weapons applications rather than categories of autonomous capabilities in the abstract. One engineer advocates clarifying at the outset of any discussion the specific cognitive process a potential artificial intelligence application will improve or automate: in other words, what intelligent function do humans normally perform that the technology aims to replicate? Does it replace the humans who would otherwise observe and analyze data, or those who choose the response, potentially a return-fire response? He offers some incisive follow-on questions:

What data is required and is clean data available to train the artificial intelligence application initially?

What sort of computational model will the artificial intelligence application use to make sense of data?

What will be missed by reducing the traditional role of humans who understand nuances and complexities in the process?[16]

Brian Michelson adds that attempting to achieve a perfect product before fielding may seem safe in the near term, but the delays would likely heighten long-term risks. He recommends testing and learning from each new increment of autonomous capability to ensure means align with ends.[17] For example, an aerial drone program might iteratively test sensing and adapting flight patterns to avoid bad weather and expected aerial traffic before navigating the trickier responses to unexpected and potentially hostile aerial vehicles.
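
A minimal sketch of that test-and-learn sequence follows. The increment names and pass/fail checks are hypothetical placeholders, standing in for whatever evaluation a real program would run before approving each added capability.

```python
# Illustrative sketch: each added autonomy increment must pass its own
# evaluation gate before the next, riskier increment is attempted.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AutonomyIncrement:
    name: str
    evaluate: Callable[[], bool]   # returns True if this increment passed its review

def field_incrementally(increments: List[AutonomyIncrement]) -> List[str]:
    """Approve increments in order; halt at the first one that fails evaluation."""
    approved = []
    for inc in increments:
        if not inc.evaluate():
            print(f"Halting: '{inc.name}' failed evaluation; review before proceeding.")
            break
        print(f"Approved: {inc.name}")
        approved.append(inc.name)
    return approved

if __name__ == "__main__":
    plan = [
        AutonomyIncrement("avoid bad weather", lambda: True),
        AutonomyIncrement("avoid expected aerial traffic", lambda: True),
        AutonomyIncrement("respond to unexpected, possibly hostile aircraft",
                          lambda: False),  # placeholder: hardest case not yet validated
    ]
    field_incrementally(plan)
```
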
An artist’s concept for autonomous air weapons (Air Force Times)

The third and final consideration, engagement with society’s moral sensibilities, rests on the notion that policy makers bear responsibility for the moral health of their society. Prudent national security leaders will strive for transparent, public dialogue so that citizens can share responsibility for moral choices and cogently evaluate the military’s application and ultimate use of greater autonomy in weapon systems. As Dobel argues, “[t]rust of other citizens and trust in institutions are social resources and social capital that leaders and major institutions should work to create and sustain.”[18] Without these, “society’s capacity to act for common purpose declines.”[19]

Shouldering responsibility for social and institutional legitimacy, leaders can foster collaboration on critical issues such as methods to code safeguards, checks, and post-action reviews into artificial intelligence-enabled autonomy in weapons. A recent Executive Order articulates guiding principles for artificial intelligence development that include: “foster public trust and confidence in [artificial intelligence] technologies,” “drive technological breakthroughs in [artificial intelligence],” and “drive development of appropriate technical standards.”[20] The policy notes that smart minds in industry can help leaders solve critical technical challenges, which will include the capability to comprehensively audit code modifications and to secure weapons against both hacking and unauthorized insider changes. As one defense industry analyst suggests, “if the U.S. and its democratic allies win the [artificial intelligence] race, the Defense Department will deserve credit because of the unique way it collaborates with the private sector” to resolve these and other thorny issues.[21]
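
One concrete form such an audit capability could take, offered here only as an illustrative sketch rather than any fielded or mandated design, is an append-only, hash-chained log of code changes, in which tampering with any recorded entry breaks every later link in the chain.

```python
# Illustrative sketch only: an append-only, hash-chained audit log of code
# modifications, so that unauthorized or after-the-fact edits are detectable.
import hashlib
import json
import time

class CodeChangeAuditLog:
    def __init__(self):
        self.entries = []

    def record(self, author, file_path, change_summary):
        """Append an entry whose hash covers the previous entry's hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "GENESIS"
        body = {
            "timestamp": time.time(),
            "author": author,
            "file": file_path,
            "summary": change_summary,
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)

    def verify(self):
        """Recompute the chain; any tampered entry breaks every later link."""
        prev_hash = "GENESIS"
        for entry in self.entries:
            check = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(check, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev_hash or entry["hash"] != recomputed:
                return False
            prev_hash = entry["hash"]
        return True

if __name__ == "__main__":
    log = CodeChangeAuditLog()
    log.record("dev_a", "targeting/filter.py", "tightened identification criteria")
    log.record("dev_b", "nav/waypoints.py", "added weather-avoidance routine")
    print("audit chain intact:", log.verify())
    log.entries[0]["summary"] = "silently altered"   # simulated insider tampering
    print("after tampering, chain intact:", log.verify())
```

The design choice is simply that each entry’s hash covers the previous entry’s hash, so the record can only be extended, never silently rewritten.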

Conclusion

History warns against deferring attention to ethical issues until a national emergency pressures deployment of a new, little understood capability. As some ethicists have warned, “the moral implications of nuclear weapons were not publicly debated until after their first use, and many of the scientists who worked on the Manhattan Project later regretted ignoring those moral issues.”[22] Dobel’s ideas on prudence can help national security leaders avoid a similar mistake while balancing the obligation to maintain our competitive edge during the ambiguous phase of artificial intelligence development.
