16 February 2020

Deterrence in the Age of Thinking Machines

by Yuna Huh Wong

What are the implications of adding thinking machines and autonomous systems to the practices that countries have developed to signal one another about the use of force and its potential consequences?

What happens to deterrence and escalation when decisions can be made at machine speeds and are carried out by forces that do not risk the human lives of the state or actor employing them?

How might the rise of these capabilities weaken or strengthen deterrence?

What are potential areas of miscalculation and unintended consequences?

The greater use of artificial intelligence (AI) and autonomous systems by the militaries of the world has the potential to affect deterrence strategies and escalation dynamics in crises and conflicts. Until now, deterrence has involved humans trying to dissuade other humans from taking particular courses of action. What happens when the thinking and decision processes involved are no longer purely human? How might dynamics change when decisions and actions can be taken at machine speeds? How might AI and autonomy affect the ways that countries have developed to signal one another about the potential use of force? What are potential areas for miscalculation, unintended consequences, and, in particular, unwanted escalation?

This exploratory report provides an initial examination of how AI and autonomous systems could affect deterrence and escalation in conventional crises and conflicts. Findings suggest that machine decisionmaking can result in inadvertent escalation or altered deterrence dynamics because of the speed of machine decisions, the ways in which machine understanding differs from human understanding, the willingness of many countries to use autonomous systems, our relative inexperience with them, and the continued development of these capabilities. Current planning and development efforts have not kept pace with the potentially destabilizing or escalatory issues associated with these new technologies, and it is essential that planners and decisionmakers begin to think about these issues before fielded systems are engaged in conflict.

Key Findings

Insights from a wargame involving AI and autonomous systems

Manned systems may be better for deterrence than unmanned ones.

Replacing manned systems with unmanned ones may not be seen as a reduced security commitment.

Players put their systems on different autonomous settings to signal resolve and commitment during the conflict.

The speed of autonomous systems did lead to inadvertent escalation in the wargame.

Implications for deterrence

Autonomous and unmanned systems could affect extended deterrence and our ability to assure our allies of U.S. commitment.

Widespread AI and autonomous systems could lead to inadvertent escalation and crisis instability.

Different mixes of human and artificial agents could affect the escalatory dynamics between two sides.

Machines will likely be worse at understanding the human signaling involved in deterrence, especially deescalation.

Whereas traditional deterrence has largely been about humans attempting to understand other humans, deterrence in this new age involves understanding along a number of additional pathways.

Past cases of inadvertent engagement of friendly or civilian targets by autonomous systems may offer insights into the technical accidents or failures that could occur with more advanced systems.

Recommendations

Conduct further work on deterrence theory and other frameworks to explicitly consider the potential effects of AI and autonomous systems.

Evaluate the escalatory potential of new systems.

Evaluate the escalatory potential of new operating concepts.

Wargame additional scenarios at the operational and strategic levels.
