9 January 2019

Pentagon Seeks a List of Ethical Principles for Using AI in War

BY PATRICK TUCKER

U.S. defense officials have asked the Defense Innovation Board to draft a set of ethical principles for the use of artificial intelligence in warfare. The principles are intended to guide a military whose interest in AI is accelerating — witness the new Joint Artificial Intelligence Center — and to reassure potential partners in Silicon Valley about how their AI products will be used.

Today, the primary document laying out what the military can and can’t do with AI is a 2012 doctrine that says a human being must have veto power over any action an autonomous system might take in combat. It’s brief, just four pages, and doesn’t touch on any of the uses of AI for decision support, predictive analytics, etc. where players like Google, Microsoft, Amazon, and others are making fast strides in commercial environments.


“AI scientists have expressed concern about how DoD intends to use artificial intelligence. While the DoD has a policy on the role of autonomy in weapons, it currently lacks a broader policy on how it will use artificial intelligence across the broad range of military missions,” said Paul Scharre, the author of Army of None: Autonomous Weapons and the Future of War.

Josh Marcuse, executive director of the Defense Innovation Board, said crafting these principles will help the department “safely and responsibly” employ new technologies. “I think it’s important when dealing with a field that’s emergent to think through all the ramifications,” he said.

The Board, a group of Silicon Valley corporate and thought leaders chaired by former Google and Alphabet chairman Eric Schmidt, will make the principles public at its June meeting; Defense Department leaders will then take them under consideration.

Marcuse believes that the Pentagon can be a leader not just in employing AI but in establishing guidelines for safe use — just as the military pioneered safety standards for aviation. “The Department of Defense should lead in this area as we have with other technologies in the past. I want to make sure the department is not just leading in developing AI for military purposes but also in developing ethics to use AI in military purposes,” he says.

The effort, in part, is a response to what happened with the military’s Project Maven, the Pentagon’s flagship AI project with Google as its partner. The project applied artificial intelligence to the vast store of video and image footage that the Defense Department gathers to guide airstrikes. Defense officials emphasized repeatedly that the AI was intended only to cut down the workload of human analysts. But they also acknowledged that the ultimate goal was to help the military do what it does better, which sometimes means finding and killing humans. An employee revolt ensued at Google; employees resigned en masse, and the company said it would not renew the contract.

Scharre, who leads the Technology and National Security Program at the Center for a New American Security, said, “One of the challenges for things like Project Maven, which uses AI technology to process drone video feeds, is that some scientists expressed concern about where the technology may be heading. A public set of AI principles will help clarify DoD’s intentions regarding artificial intelligence.”

The Maven episode represents a rare role reversal for a contractor and the Pentagon, with the Defense Department being more open, or at least more consistent, in its messaging than the contractor it was paying. Defense officials hope that having a list of principles will allow companies that want to work with the Pentagon to do so without having to send mixed messages to the public and their own employees.

Marcuse says a published list of ethical guidelines for the Department to follow will allay some of the suspicions many in Silicon Valley have about the military’s use of AI. “If we show leadership, responsibility, show that we’re circumspect and cautious where we need to be, and rigorous in our testing and making smart tradeoff decisions, I think that will address a lot of the concerns that partners and potential partners have raised,” he said.

As social media companies and large tech players like Amazon have developed new AI offerings, selling them to law enforcement agencies and the military or using them to maximize their own profits, ethicists, academics, and other observers have raised serious questions about how big corporate players are developing what many consider to be the most important technology of the 21st century.

A common concern is data bias, in which algorithmic decisions are informed by large, but not necessarily accurate, data sets. Consider the 2015 controversy that erupted when it was revealed that Google’s photo-labeling algorithm was mislabeling black people as gorillas. The private sector hasn’t worked out the answers to these questions even as it has rushed to commercialize AI. The U.S. military, says Marcuse, doesn’t have the luxury of deploying new tech solutions and rushing them into the field without a good understanding of how they work. A massive data-labeling error for the military isn’t just a public relations disaster; it’s a potential killer, especially when combined with highly autonomous robotic weapons. The concerns that ethicists and academics have raised are good ones, he says.

“We have to have questions about how these tools are used appropriately, how data is used, how data is labeled, how to ensure the appropriate degrees of autonomy,” he says.

The recommendations won’t take the form of a strict list of commandments so much as a range of recommendations informed by a scenario-based methodology: what’s the most appropriate use of AI in one battle situation versus another? Those scenarios are informed by interviews with actual commanders, said Marcuse. “We actually ask commanders what their questions are…The scenarios are designed to test those questions.”

Of course, Defense Department leaders will decide on their own, and behind closed doors, how to put the principles into practice. Marcuse says it’s important for them to be able to do that away from the public eye. But, he says, the demand signal for the principles list came from within the department — in particular, from CIO Dana Deasy — and it had the “full and enthusiastic” support of former Defense Secretary James Mattis.

And they aren’t intended for a single user or audience within the military. Once they’re published, the hope is that combatant commanders, service chiefs, and all sorts of other players will find them useful in drafting future strategies, concepts of operation, training guides, etc. “If you are the Marine Corps or Army, and you are trying to come up with training manuals and conops, there is a lot of planning, thinking and training. We think the principles would be informing those things,” he says.

The process kicks off on Jan. 23 with a public listening session at the Harvard Belfer Center, an effort to court exactly the sort of academics and ethicists who have expressed the most skepticism about the Defense Department’s AI ambitions.

“It is critical that the forthcoming [Defense Innovation Board] recommendation to DoD is consistent with an ethics-first approach and upholds existing legal norms around warfare and human rights, while continuing to carry out the Department’s enduring mission to keep the peace and deter war. DIB members will collect and review all comments for consideration as they draft these AI principles in the coming months,” according to an announcement.
