30 April 2017

How Should We Treat Our Military Robots?

BY AUGUST COLE

Increasingly human-like automated weapons demand an honest accounting of our emotional responses to them. 

The audience of venture capitalists, engineers and other tech-sector denizens chuckled as they watched a video clip of an engineer using a hockey stick to shove a box away from Boston Dynamics’ Atlas robot as it tried to pick the box up. Each time the humanoid robot lumbered forward, its objective moved out of reach. From my vantage point at the back of the room, the laughter began to sound uneasy, as if the engineer’s actions and the machine’s response had crossed some imaginary line.

If these tech mavens aren’t sure how to respond to increasingly life-like robots and artificial intelligence systems, I wondered, what are we in the defense community missing?

This is a pressing question. Military autonomous capabilities are no longer an abstract area of promise – and peril. “The idea of artificial intelligence and computing becoming almost human is very much what we’re working on today with some of our technologies,” said AMD President and CEO Lisa Su in a WIRED video interview with legendary film director Ridley Scott, whose Alien saga and original Blade Runner shaped how many of us visualize lifelike robots.

You don’t have to be a dystopian-minded science fiction writer to realize that the next few years will see military and government officials make far-reaching decisions regarding the use and regulation of AI and machine learning, robotics, and autonomous cyber and kinetic weapons. These choices will alter the course of global affairs in the 21st century, and even shape the conflicts in which we’re engaged today. “Almost nowhere do I see a technology that’s current that offers as much as autonomy,” Will Roper, the head of the Defense Department’s Strategic Capabilities Office, said in a recent interview. “We’re working very hard to produce a learning system.”

As that day nears, it is worth considering our current limitations in communicating with the neural-network machines that are upending our sense of normal. The promise of these technologies overshadows their inability to “explain” their decision-making logic to their creators. “This is our first example of alien intelligence,” Stephen Wolfram, CEO of Wolfram Research and a pioneer in machine learning, told an audience at March’s Xconomy Robo Madness conference in Cambridge, Massachusetts.

Warfare will be no different. While the U.S. military insists that it will have a human in or on the decision-making “loop,” the commercial world is aggressively pushing past that threshold. Many transactions will soon simply be AI to AI, Wolfram said, which will lead to fundamental changes in notions of contract law and enforcement.

Are these developments an invitation to reconsider the rules and norms of warfare as well? Is it too soon to start thinking about Computational Laws of Armed Conflict? That is a sensible, if far-reaching, step that runs right into a fundamental question that has yet to be answered. “What do we want the AIs to do?” asked Wolfram. And how will the AIs know what we really want? There is the discomforting thought that we might invent a capability that quickly moves beyond our control, a gnawing awareness akin to what the Manhattan Project scientists must have felt as they stared over the precipice.

There are no easy answers to these questions. The defense community will have to make more room for constructive argument over the rules and norms that should govern autonomous machine conflict. It will get harder and more unpopular – yet increasingly important – to remain focused on the risks and strategic uncertainty introduced by technologies that are predicated on ideals of perfection. As National Security Advisor Lt. Gen. H.R. McMaster wrote in 2014 about the four fallacies of technology in warfare, “These fallacies persist, in large measure, because they define war as one might like it to be rather than as an uncertain and complex human competition usually aimed at achieving a political outcome.”

Whether AI and robotics with increasingly human-like qualities further perpetuate these myths of modern warfare will depend as much on scientific knowledge as on an honest accounting of our emotional responses to them.
