22 February 2019

Deadly US Applications of Artificial Intelligence

Mary Wareham

In the United States and around the world, public concern is rising at the prospect of weapons systems that would select and attack targets without human intervention. © 2018 Campaign to Stop Killer Robots

As the United States Department of Defense releases a strategy on artificial intelligence (AI), questions loom about whether the US government intends to accelerate its investments in weapons systems that would select and engage targets without meaningful human control.

The strategy considers a range of potential, mostly benign uses of AI and makes the bold claim that AI can help “reduce the risk of civilian casualties” by enabling greater accuracy and precision. It also commits to considering how to handle hacking, bias, and “unexpected behavior,” among other concerns.

Scientists have long warned of the potentially disastrous consequences when fully autonomous weapons systems, driven by complex algorithms and deployed by opposing forces, meet in warfare. How would the use of such learning systems conform with current US policy, which commits “to exercise appropriate levels of human judgment over the use of force”?

Fully autonomous weapons raise so many ethical, legal, technical, proliferation, and international security concerns that Human Rights Watch and other groups co-founded the Campaign to Stop Killer Robots in early 2013 to push for a preemptive ban. At the end of that year, governments launched diplomatic talks on what they called lethal autonomous weapons systems. Yet progress towards addressing these concerns has been hampered by the refusal of the US, Russia, and a handful of other states to work towards a concrete outcome.

Despite diplomatic setbacks, it’s clear that the notion of removing human control from weapons systems and the use of force is becoming increasingly unpopular and stigmatized. The United Nations secretary-general is urging states to prohibit such weapons before they are introduced, calling them “morally repugnant and politically unacceptable.” A new poll of 26 countries shows that public opposition to fully autonomous weapons is rising rapidly, and with it the expectation that governments will act to prevent the development of such weapons systems.

Yet the US steadfastly refuses to consider negotiating a new treaty to prevent the development of fully autonomous weapons. There are fears that the already weak 2012 Pentagon policy directive on autonomy in weapons systems could be overridden or replaced by something weaker still.

The AI strategy pledges that the Defense Department will develop and use artificial intelligence only in an “ethical” and “lawful” manner for “upholding and promoting our Nation’s values.” Yet such promises ring hollow while the US continues to resist efforts to regulate fully autonomous weapons through new legal measures. Without regulation in the form of a new treaty banning fully autonomous weapons, policy pledges of self-restraint are unlikely to last long.