6 May 2016

Jammers, Not Terminators: DARPA & The Future Of Robotics

May 02, 2016

An “Intrepid Tiger II” jamming pod on an F-18 Hornet. DARPA wants to put artificial intelligence in such pods to come up with countermeasures “in real time.”

WASHINGTON: Robophobes, relax. The robot revolution is not imminent. Machine brains have a lot to learn about the messy physical world, said DARPA director Arati Prabhakar. Instead, DARPA sees some of the most promising applications for artificial intelligence in the intangible realm of radio waves. That includes electronic warfare — jamming and spoofing — as well as a newly launched “grand challenge” on spectrum management: allocating and reallocating frequencies among users according to demand more nimbly than a human mind could manage, let alone the federal bureaucracy. In short, don’t think Terminators: think jammers.

DARPA director Arati Prabhakar

“Where it works well, we’re finding amazing new applications for artificial intelligence,” Prabhakar told an Atlantic Council conference this morning, extolling DARPA’s new unmanned ship, the Sea Hunter. “But we also see a technology that is still quite fundamentally limited.”

For example, feeding enough training images into a “machine learning” algorithm can teach it to recognize real objects remarkably well. Prabhakar cited a system that correctly identified an orange-vested construction worker on the side of the road — an important thing for, say, a driverless car to be able to recognize and avoid.
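To make that concrete, here is a minimal sketch of what “feeding labeled images into a machine learning algorithm” looks like in code. It is not DARPA’s or Stanford’s system; the data is synthetic and the “vest detector” framing is invented purely for illustration. The pattern is the important part: example images go in as pixel vectors with labels, and a classifier learns a mapping it can apply to images it has never seen.

```python
# Minimal sketch (hypothetical, synthetic data): a classifier learns to
# separate "worker in bright vest" patches from "background" patches.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_patch(has_vest: bool) -> np.ndarray:
    """An 8x8 grayscale stand-in image; a bright square stands in for an orange vest."""
    img = rng.normal(0.2, 0.05, size=(8, 8))
    if has_vest:
        img[2:6, 2:6] += 0.6
    return img.ravel()  # flatten to a 64-element pixel vector

# 400 labeled examples, alternating between the two classes.
X = np.array([make_patch(i % 2 == 0) for i in range(400)])
y = np.array([i % 2 == 0 for i in range(400)], dtype=int)

# Train on 300 examples, then score on 100 held-out ones.
clf = LogisticRegression(max_iter=1000).fit(X[:300], y[:300])
print("held-out accuracy:", clf.score(X[300:], y[300:]))
```

The toothbrush-as-baseball-bat failure below is exactly what this pattern cannot guarantee against: the model only knows the statistics of the pixels it was trained on, so an unfamiliar input can be confidently mislabeled.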

“The problem is that when they’re wrong, they’re wrong in ways that no human would ever be,” she said, showing her next slide: a fat-cheeked baby clutching a toothbrush, which the AI had labeled “a young boy is holding a baseball bat.”

An example of the shortcomings of artificial intelligence when it comes to image recognition. (Andrej Karpathy, Li Fei-Fei, Stanford University)

“This is a critically important caution about where and how we use this generation of artificial intelligence,” Prabhakar said.

She means it. Yes, Prabhakar’s example is cute and funny. But distinctly less adorable is the prospect that a future armed robot would classify a civilian holding a broom as an enemy combatant holding a rifle and open fire. (It’s the kind of mistake that even experienced human soldiers have made.) The potential for such computer errors is why the US military envisions a human being always making the decision to shoot or not, though some other nations may not be so worried about collateral damage.

Artificial Intelligence & the Radio Spectrum

Mistaking a toothbrush for a baseball bat makes the computer seem dumb, but remember that humans have about a 530-million-year head start on visual perception, which is why humans are better soldiers, drivers, pilots, and sailors than any machine (for now). By comparison, humans have only been doing complex math for a few thousand years at most, and computers already far surpass us. Humans have zero natural ability to perceive radio waves, which means artificial intelligence isn’t playing catch-up to us in electronic warfare and frequency management the way it is in visual object recognition and navigation.

We’ve written before on DARPA’s initiatives in “cognitive electronic warfare.” The idea there is that each combat aircraft carries enough computer power to isolate an unfamiliar radio-frequency transmission — say, the seeker radar from an incoming missile, or the radar-blinding beam from an enemy jammer — and devise a counter-signal to neutralize it “in real time,” as Prabhakar puts it.
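In very rough terms, the loop looks something like the following sketch. It is only an illustration of the concept, with invented signal parameters; an actual cognitive EW system would characterize waveforms far more richly and choose countermeasures from learned models rather than a single tone.

```python
# Toy illustration of "sense an unfamiliar emission, devise a counter-signal".
# All parameters here are assumptions for the example, not real system values.
import numpy as np

fs = 1_000_000                        # sample rate in Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)        # 10 ms of received samples

# Simulated unknown emitter: a 155 kHz tone buried in noise.
received = np.cos(2 * np.pi * 155_000 * t) + 0.5 * np.random.randn(t.size)

# Step 1: isolate the emission by finding the strongest spectral peak.
spectrum = np.abs(np.fft.rfft(received))
freqs = np.fft.rfftfreq(received.size, d=1 / fs)
f_est = freqs[np.argmax(spectrum)]
print(f"estimated emitter frequency: {f_est / 1e3:.1f} kHz")

# Step 2: synthesize a counter-signal centered on that estimate. A single
# tone stands in for whatever jamming or spoofing waveform would really be used.
counter = np.cos(2 * np.pi * f_est * t)
```

The point of the DARPA work is to do the equivalent of both steps onboard the aircraft, against signals no one has cataloged, in real time rather than over weeks of analysis on the ground.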

Currently, she noted, “it can be weeks to months to literally years” before a countermeasure is developed and installed across the air fleet. Many aircraft don’t have the equipment to identify a new enemy signal, and those that can must still pass the data to human analysts on the ground to figure out a counter, then program that into every aircraft.

A week ago, DARPA launched another radio-frequency initiative, one of its celebrated “grand challenges” open to all comers and offering a cash prize of $2 million. (Previous grand challenges laid the groundwork for today’s work on self-driving cars.) To get that prize, teams must survive three years of winnowing, culminating in a 2020 contest in a massive (and yet to be constructed) wireless testbed called the “Colosseum.”

This Spectrum Collaboration Challenge (SC2 for short) will address the technically and politically painful task of divvying up the available radio frequencies among the thousands of bandwidth-hungry users, from the military and emergency services to cellphones. Those consumer wireless devices drive what Prabhakar calls “exploding demand.” The Defense Department is particularly anxious because Congress has repeatedly required it to give up formerly military frequencies for more lucrative civilian use, even as the military becomes more dependent on radar, wireless networks, and other radio frequency devices.


Today, we trust our devices to switch from one cell tower or local wireless network to another, and we let them shift frequencies slightly to get around interference: Cellphones do both these things all the time without their users ever noticing. But each class of devices has its own allocated slice of the spectrum it may not move beyond. Those allocations are set by the Commerce Department’s National Telecommunications and Information Administration (NTIA) after much debate, deliberation, and sometimes legislation — a ponderous and politicized process.

DARPA wants to give devices much more room to roam across the spectrum. Essentially, the Spectrum Collaboration Challenge is about teaching artificial intelligences to work together to assign frequencies on the fly, in fractions of a second, so that networks can get more spectrum when they need it and then relinquish it when they’re not transmitting as much. “The team that shares most intelligently is going to win,” DARPA program manager Paul Tilghman has said.
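As a concrete (and purely hypothetical) sketch of that kind of sharing, the snippet below re-deals a pool of channels every scheduling interval in proportion to each network’s reported demand; a network that goes quiet automatically gives its spectrum back. SC2 competitors will use far more sophisticated, learned strategies, but the give-and-take is the same basic idea.

```python
# Hypothetical sketch of demand-driven spectrum sharing (not the SC2 protocol):
# channels are re-allocated each interval in proportion to reported demand.
from typing import Dict, List

def allocate(channels: List[int], demand: Dict[str, float]) -> Dict[str, List[int]]:
    """Split a pool of channel indices among networks, weighted by demand."""
    allocation = {net: [] for net in demand}
    total = sum(demand.values())
    if total == 0:
        return allocation                 # everyone idle: nothing is assigned
    quotas = {net: round(len(channels) * d / total) for net, d in demand.items()}
    it = iter(channels)
    for net, quota in sorted(quotas.items(), key=lambda kv: -kv[1]):
        for _ in range(quota):
            ch = next(it, None)
            if ch is None:                # pool exhausted by rounding
                return allocation
            allocation[net].append(ch)
    return allocation

# One scheduling tick: the cellular network is busy, the radar and Wi-Fi are not.
print(allocate(list(range(10)), {"cellular": 8.0, "radar": 1.0, "wifi": 1.0}))
# A moment later the radar lights up and the shares shift accordingly.
print(allocate(list(range(10)), {"cellular": 2.0, "radar": 7.0, "wifi": 1.0}))
```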

Radio frequency allocation and electronic warfare are unglamorous, not to mention invisible. They’re not the shiny Terminators or Predator drones most people worry about when they fear a robotic future. But an artificial intelligence that can help its human masters dominate the electromagnetic spectrum might have far more impact on future warfare than any robot that strikes a physical blow.

A laser experiment at the Air Force Research Laboratory

An American Advantage?

All this innovation is exciting, albeit sometimes unnerving. But what’s unnerving for the Pentagon is different from what’s unnerving for the public. The US military isn’t worried about the emergence of artificially intelligent war robots: It’s worried someone else may get them before we do. Unlike the Cold War, when the US consistently led its Soviet opponent in the technology race — with notable exceptions like rocket engines — the information age has much lower barriers to entry and many more potential competitors moving much faster.

“The US is still in a very, very strong position in R&D; what’s different is we’re no longer alone,” Prabhakar said. “Every growing nation has significantly expanded its R&D spending.”

“We’re coming through a period where we had such an advantage over some of the other players around the world,” Prabhakar told me after her Atlantic Council talk. “We’re going to find out now what happens when everyone else gets to play.”

DARPA can’t single-handedly tilt the military technology playing field back in America’s favor, she emphasized: “DARPA’s important, but we’re a very small part, we’re about lighting the spark.”

So in an era when algorithms matter more than machine tools, when the most important innovations are also the most portable across borders, and new technology spreads around the world faster than ever before, I asked, what is America’s enduring advantage?

“I’m actually optimistic,” Prabhakar told me. “It’s the ability to innovate, to have a big ambition, to let people take risks — the fact that it’s okay to fail in America, and pick up and try again.”
