9 December 2019

Bad Idea: Integrating Artificial Intelligence with Nuclear Command, Control, and Communications

Bryce Farabaugh

The plotline is a tired trope by now: machines wrest control of a nuclear arsenal from their human creators and either initiate apocalypse (à la Terminator’s Skynet) or avert disaster (à la WarGames’ War Operation Plan Response). The scenarios are potent and their relevance lasting because they serve as a parable for people’s hopes and fears about technology: in combining the terrible destructive potential of nuclear weapons with human suspicion of artificial intelligence (AI), we hope the outcome is less Promethean and more deus ex machina. Although this anxiety has existed for decades, few would have predicted that the debate over AI and nuclear weapons would continue to this day.

Most recently, a piece published in War on the Rocks in August 2019, titled “America Needs a ‘Dead Hand,’” resurrected the debate. The authors argued that, because of advancements in emerging technologies like hypersonic weapons and nuclear cruise missiles, attack-time compression will put an unacceptable amount of stress on American leadership’s ability to make sound decisions during a nuclear crisis. The article made such waves that even senior military leaders, including the director of the Pentagon’s Joint Artificial Intelligence Center, expressed skepticism about the authors’ argument. Ultimately, while the authors’ diagnosis may be correct, their prescription is wrong: their proposal for a nuclear “dead hand” that integrates AI with nuclear command, control, and communications (NC3) to create an automated strategic response system underestimates AI’s potential to inadvertently precipitate a catastrophic mistake.

To understand AI’s possible impact on nuclear weapons policy, it’s important to have a solid grasp of what AI is, what it isn’t, and what is meant by NC3.


Researchers have defined AI as “machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgement, and intention.” Some tasks, like image or facial recognition, are already performed at or beyond human accuracy because such tasks are unambiguous and (given enough time and computer processing power) can be learned through machine learning.

NC3, likewise, is a simple acronym that describes a hugely complex system. Nuclear command and control (NC2) is the system through which the President, as commander-in-chief, exercises authority and direction over nuclear forces. The third “C” stands for communications: the survivable network of communications and warning systems meant to facilitate those operations.

How AI could support NC3 is difficult to predict but important to consider for several reasons. First, AI is a nascent technology, and predicting a new technology’s impact and timeframe for widespread adoption is nearly impossible (as evidenced by the disappointing lack of flying cars and off-world colonies at the end of 2019). Second, neither AI nor NC3 is a single application or process; both are complex collections of various systems. This means AI could provide utility in one narrow aspect of NC3 (e.g., early warning satellite data collection) but be completely useless in another (e.g., securing radio frequency systems). Finally, NC3 is undergoing major modernization along with the rest of the American nuclear arsenal, and the nexus of rapid advancements in AI and this push for across-the-board upgrades means the costs and benefits of integration could be significant.

Although the conversation around AI and NC3 has been particularly active recently, the debate about the wisdom of integrating the two is much older, and both the United States and Russia have in the past experimented with aspects of command and control that could be considered “automated” to some degree.

The most widely known experiment of this kind was the Soviet system known as “Perimeter.” Although popularly described as a “dead hand” automated system, Perimeter reportedly relied on a network of sensors to detect nuclear detonations, after which it would confirm whether communication lines with Soviet leadership were still active. If leadership was determined to be compromised, the system could not actually launch weapons itself; instead, it could immediately delegate launch authority to lower-level officers who would not normally have such authority.
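To see how limited even that reported automation was, consider the following sketch, which encodes the decision flow described in open-source accounts of Perimeter. It is purely illustrative: every name and condition is hypothetical, and the real system’s details have never been made public.

```python
# Purely illustrative sketch of the decision flow publicly attributed to Perimeter.
# All names and conditions here are hypothetical; the real system's details are not public.

def perimeter_response(detonations_detected: bool, leadership_reachable: bool) -> str:
    """Return the reported behavior: delegate authority to humans, never launch autonomously."""
    if not detonations_detected:
        return "remain dormant"
    if leadership_reachable:
        return "defer to national leadership"
    # Even in the worst case, the system reportedly only transferred authority to officers.
    return "delegate launch authority to duty officers"

print(perimeter_response(detonations_detected=True, leadership_reachable=False))
```

Even in this simplified rendering, the final step hands the decision to people rather than launching weapons, which is precisely the feature a true “dead hand” would remove.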

The comparable system in the United States, called the Survivable Adaptive Planning Experiment (SAPE), was a late 1980s research project that explored using the most advanced AI of the day to target Soviet ICBM launchers. Similar to the Perimeter system, SAPE didn’t have direct launch authority, but would instead translate intelligence, surveillance, and reconnaissance (ISR) data into nuclear targeting plans that would then be transmitted to manned B-2 bombers.

One can imagine ways that AI could increase security, even in the nuclear policy space. For example, AI could be used to monitor large nuclear material facilities, defend against cyber intrusions, or even support decision-making under certain circumstances.

The difference between these best-case scenarios and what is being proposed by some today is clear, however: creating a true “dead hand” that automates the launch of nuclear weapons is dangerous because it removes humans from the decision-making process entirely.

One need only look at the recent past to see numerous nuclear near-misses in which humans used their judgement to critically evaluate information relayed to them by machines. Had they simply followed protocol, the result would have been nuclear war.

For example, on November 9, 1979, a missile warning system was inadvertently fed a test scenario depicting a Soviet nuclear attack. Only by double-checking the warning against the U.S. Air Force’s Ballistic Missile Early Warning System were officers able to recognize the false alarm.
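The value of that kind of cross-check is easiest to see in schematic form. The sketch below is a hypothetical rendering of the corroboration step described above, not the actual NORAD procedure; its names and logic are illustrative only.

```python
# Hypothetical sketch of the cross-checking described above: a warning is treated as
# credible only if an independent sensor system corroborates it.
# Names and logic are illustrative, not the actual NORAD procedure.

def assess_warning(primary_alert: bool, independent_sensors_confirm: bool) -> str:
    if primary_alert and independent_sensors_confirm:
        return "corroborated: escalate to leadership"
    if primary_alert:
        return "uncorroborated: treat as probable false alarm"
    return "no alert"

# The 1979 incident: the warning system showed an attack, but the cross-check did not.
print(assess_warning(primary_alert=True, independent_sensors_confirm=False))
```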

Another near-miss took place September 25, 1983, when a Soviet early warning satellite reported the launch of five land-based missiles by the United States during a period of heightened tension between the two superpowers. The Soviet officer on duty had only minutes to determine whether to initiate a response and, despite having verified the satellite was operating properly, told his commanders the launch was a false alarm, based partly on his intuition that the United States was unlikely to open a surprise attack with only five missiles. Investigations later revealed the satellite had mistaken sunlight reflecting off the tops of clouds for a missile launch.

Separate from these examples of nuclear close calls, AI technologies are struggling to live up to expectations even when performing comparatively low-stakes tasks. For example, self-driving car technology is improving, but it continues to run into roadblocks as developers struggle to handle the thousands of potential real-world scenarios like jaywalking pedestrians or inclement weather.

In another case, a test of Amazon’s facial recognition software incorrectly matched 28 members of Congress to mugshots in a database of 25,000 arrest photos. Amazon responded that the test used the default confidence threshold of 80 percent rather than the 95 percent threshold it recommends for law enforcement applications, but the distinction raises a serious question: what confidence threshold would be acceptable for decisions involving nuclear weapons? 95 percent? 99 percent?
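To make the threshold question concrete, the toy example below shows how shifting a confidence threshold changes which candidate matches a system accepts. The similarity scores are invented for illustration and do not come from Amazon’s service.

```python
# Hypothetical illustration of how a confidence threshold changes match decisions.
# The similarity scores below are invented; they are not outputs of any real system.

match_scores = [0.81, 0.86, 0.92, 0.96, 0.99]

for threshold in (0.80, 0.95, 0.99):
    accepted = [score for score in match_scores if score >= threshold]
    print(f"threshold {threshold:.2f}: {len(accepted)} of {len(match_scores)} candidates accepted")

# Raising the threshold trims false matches but never eliminates error;
# it only changes which kind of mistake the system makes.
```

No threshold makes the errors disappear; it only shifts them, which is a troubling property for a system whose mistakes would be measured in warheads.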

The combination of serious nuclear near-misses caused by an imperfect NC3 system and current AI shortcomings underscores how important human judgement is in decisions about nuclear weapons. AI is still in its infancy, and removing the human factor from the decision-making process would only narrow the opportunities for avoiding nuclear war. Indeed, history has shown that during numerous crises, humans have stepped back from the brink, critically evaluated data relayed by machines, and narrowly avoided nuclear catastrophe, which is why integrating AI with NC3 could prove to be the last bad idea we have.
