18 February 2019

No, the Pentagon Is Not Working on Killer Robots—Yet


The U.S. Department of Defense on Feb. 12 released its roadmap for artificial intelligence, and the most interesting thing about it might be what’s missing from the report: The military is nowhere close to building a lethal weapon capable of thinking and acting on its own.

As it turns out, the military applications of artificial intelligence today and in the foreseeable future are much more mundane. The Defense Department has several pilot projects in the works that focus on using AI to solve everyday problems such as floods, fires, and maintenance, said U.S. Air Force Lt. Gen. Jack Shanahan, who heads up the Pentagon’s new Joint Artificial Intelligence Center.

“We are nowhere close to the full autonomy question that most people seem to leap to a conclusion on when they think about DoD and AI,” Shanahan said during a briefing Tuesday.

It’s not that the Department of Defense hasn’t given the idea of fully autonomous weapons much thought. In 2012, the Pentagon published an autonomy directive that sought to define what constitutes an autonomous weapon system and how it should be deployed. These guidelines state clearly that there should always be a human in the loop but leave open to interpretation the question of how much control the human will have over the weapon system.

In fact, many precision-guided missiles already operate with some degree of autonomy. These weapons, called “fire and forget,” need no human intervention after firing. A human operator programs targeting information prior to launch. As the missile gets closer, its onboard radar activates and guides it toward the target. Some advanced missiles are even re-targetable after launch.
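In software terms, that limited autonomy amounts to a handoff from operator to seeker. The toy sketch below illustrates the sequence; every name, range, and guidance rule in it is invented for this example, and real guidance software is of course nothing this simple.

```python
# A minimal, hypothetical sketch of a "fire and forget" engagement sequence.
# All names, thresholds, and the pursuit rule are invented for illustration.
import math

SEEKER_RANGE = 10_000.0  # meters at which the onboard radar activates (assumed)

def steer(pos, heading, aim, speed=300.0, dt=0.1, max_turn=0.05):
    """One guidance step: turn a bounded amount toward the aim point, then fly."""
    desired = math.atan2(aim[1] - pos[1], aim[0] - pos[0])
    turn = (desired - heading + math.pi) % (2 * math.pi) - math.pi
    heading += max(-max_turn, min(max_turn, turn))
    return (pos[0] + speed * dt * math.cos(heading),
            pos[1] + speed * dt * math.sin(heading)), heading

# Before launch, the human operator programs the target coordinates.
programmed_target = (50_000.0, 8_000.0)
pos, heading = (0.0, 0.0), 0.0

while math.dist(pos, programmed_target) > 50.0:
    if math.dist(pos, programmed_target) > SEEKER_RANGE:
        # Midcourse: fly toward the pre-programmed coordinates.
        pos, heading = steer(pos, heading, programmed_target)
    else:
        # Terminal phase: the onboard radar refines the aim point with no
        # further human input (here, a stand-in for a live radar return).
        radar_fix = programmed_target
        pos, heading = steer(pos, heading, radar_fix)
```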

But the kind of fully autonomous weapon seen in movies like Terminator and I, Robot—ones capable of human thoughts and decisions—is a long way off, experts say.

“Compare say a remotely piloted Reaper [drone] to an autonomous Reaper—it would be really hard to have an autonomous Reaper making a decision on whether to fire or not,” said Michael Horowitz, a professor of political science and the associate director of Perry World House at the University of Pennsylvania. “Some people think an AI [weapon] might never be able to make that choice.”

Building a “Skynet”—the fictional AI system in the Terminator franchise that becomes self-aware and tries to destroy humanity—would require “multiple breakthroughs in the speed and efficacy of algorithms” that most AI researchers agree are far in the future, Horowitz said.

The problem is the sheer amount of data needed to train an algorithm for any contingency the system might face, particularly on complex battlefields, in a way the human operator could trust, he said.

“It’s really hard to do with the level of reliability that the United States demands when it comes to the use of military force,” Horowitz said. “It’s not that you can’t do it—and others with lower standards may make other choices—it’s that the very high standards we put on our weapons raises the bar.”

The Defense Department’s new AI strategy—rolled out one day after President Donald Trump launched the “American AI Initiative” designed to accelerate U.S. investment in AI—is for the most part a continuation of the previous administration’s policy, Horowitz said. Key tenets of the Pentagon’s document are accelerating the delivery and adoption of AI; evolving partnerships with industry, academia, allies, and partners; cultivating an AI workforce; and leading in military AI ethics and safety.

One thing the strategy does make clear, Horowitz said, is that advances in AI have military applications far beyond autonomous weapon systems.

“We get caught up in the killer robots and Skynet because those are the things we fear from the movies, but most applications of AI in the military are going to be pretty mundane,” he said.

Indeed, of the two pilot programs the Joint Artificial Intelligence Center has launched, one is focused on humanitarian assistance and disaster relief, specifically fighting fires, and the other on predictive maintenance, Shanahan said.

For the firefighting mission, the Pentagon is partnering with the Department of Homeland Security, the Federal Emergency Management Agency, and other agencies to identify fire lines, Shanahan said. The idea is to use AI algorithms to more efficiently comb through video and still images of infrastructure damage from recent fires. This data will then be transmitted to command-and-control centers and firefighters on the ground with handheld devices.
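As a rough illustration of the kind of imagery triage involved, the toy sketch below flags likely burn-damaged pixels in an aerial still and traces a crude perimeter from them. The color thresholds and helper functions are invented for this example and bear no relation to the pilot's actual algorithms.

```python
# Hypothetical sketch of imagery triage: flag likely burn-damaged pixels so
# analysts review candidates instead of every frame. Thresholds are invented.
import numpy as np

def burn_mask(image: np.ndarray) -> np.ndarray:
    """image: H x W x 3 float array in [0, 1] (red, green, blue channels)."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    # Burned ground tends to be dark and red-shifted relative to vegetation.
    return (r > g) & (r > b) & (image.mean(axis=-1) < 0.35)

def fire_line(mask: np.ndarray):
    """Crude 'fire line': for each image row, the easternmost burned pixel."""
    width = mask.shape[1]
    east = np.where(mask.any(axis=1),
                    width - 1 - np.argmax(mask[:, ::-1], axis=1), -1)
    return [(row, col) for row, col in enumerate(east) if col >= 0]

frame = np.random.rand(480, 640, 3)       # stand-in for a drone still image
perimeter = fire_line(burn_mask(frame))   # points to push to handheld devices
```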

The hope is that the use of AI will significantly speed up the process of tracking wildfires, Shanahan said.

“You will see there are pictures of people on the back of pickup trucks with acetate plotting these things manually,” Shanahan said. By contrast, this application will “give them an idea of what that fire line might look like in minutes.”

The project was prompted by the recent devastating California wildfires, he added.

For the second pilot, the Defense Department is using AI to find trends in maintenance done on the H-60 helicopter, which is used by all of the military services, to better predict when repairs will be needed.

“This was a great example of all the services having this common problem. They all use this asset. Data was already coming off this asset, and we could quickly get the services to agree on the problem that we were trying to solve,” said Dana Deasy, the Pentagon’s chief information officer. “This was a great example of how the Joint Artificial Intelligence Center was used in the way we see it being used in the future.”
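The statistical core of such a pilot can be sketched very simply. The example below assumes purely invented per-flight features and labels (nothing here reflects actual H-60 data) and trains an off-the-shelf classifier to flag flights with elevated failure risk, so a repair can be scheduled before a breakdown.

```python
# Minimal sketch of predictive maintenance, assuming hypothetical per-flight
# sensor summaries labeled with whether a component failed soon afterward.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2_000
# Invented features: vibration (g), oil temperature (°C), hours since overhaul.
X = np.column_stack([
    rng.normal(1.0, 0.3, n),
    rng.normal(90.0, 10.0, n),
    rng.uniform(0, 500, n),
])
# Toy ground truth: accumulated wear and heat drive failures.
risk = 0.002 * X[:, 2] + 0.5 * (X[:, 0] - 1.0) + 0.02 * (X[:, 1] - 90.0)
y = (risk + rng.normal(0, 0.3, n) > 0.6).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
# Score an upcoming flight: schedule a repair if predicted risk is high.
p_fail = model.predict_proba([[1.4, 105.0, 480.0]])[0, 1]
print(f"predicted failure risk: {p_fail:.0%}")
```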

Separately, the Pentagon continues to work on Project Maven, which uses AI to analyze surveillance footage from drones flying over battlefields. It is currently deployed to at least five clandestine locations, including in the Middle East and Africa.
