18 July 2017

Pentagon Studies Weapons That Can Read Users’ Minds

By SYDNEY J. FREEDBERG JR.

NEWSEUM: The troops of tomorrow may be able to pull the trigger using only their minds. As artificially intelligent drones, hacking, jamming, and missiles accelerate the pace of combat, some of the military’s leading scientists are studying how mere humans can keep up.

One option: Bypass crude physical controls — triggers, throttles, keyboards — and plug the computer directly into the human brain. In one DARPA experiment, a quadriplegic first controlled an artificial limb and then flew a flight simulator. Future systems might monitor the user’s nervous system and compensate for stress, fatigue, or injury. Is this the path to what the Pentagon calls human-machine teaming?

This is an unnerving scenario for those humans, like Stephen Hawking, who mistrust artificial intelligence. If your nightmare scenario is robots getting out of control, “let’s teach them to read our minds!” is probably not your preferred solution. It sounds more like the beginning of a movie where cyborg Arnold Schwarzenegger goes back in time to kill someone.

But the Pentagon officials who talked up this research yesterday at Defense One’s annual tech conference emphasized the objective was to improve human control over artificial intelligence. Teaching AI to monitor its user’s level of stress, exhaustion, distraction, and so on helps the machine adapt itself to better serve the human — instead of the other way around. Teaching AI to instantly detect its user’s intention to give a command, instead of requiring a relatively laborious push of a button, helps the human keep control — instead of having to let the AI off the leash because no human can keep up with it.

Official Defense Department policy, as then-Secretary Ash Carter put it, is that the US will “never” allow an artificial intelligence to decide for itself whether or not to kill a human being. However, no less a figure than Carter’s undersecretary for acquisition, technology and logistics, Frank Kendall, fretted publicly that making our robots wait for human permission would slow them down so much that enemy AI without such constraints would beat us. The Vice-Chairman of the Joint Chiefs, Gen. Paul Selva, calls this the “Terminator Conundrum.” Neuroscience suggests a way out of this dilemma: instead of slowing the AIs down, make the humans’ orders come faster.

DARPA’s Revolutionizing Prosthetics program is devising new kinds of artificial limbs — and new ways to control them.

Accelerate Humanity

“We will continue to have humans on the loop, we will have human input in decisions, but the way we go about that is going to have to shift, just to cope with the speed and the capabilities that autonomous systems bring,” said Dr. James Christensen, portfolio manager at the Air Force Research Laboratory’s 711th Human Performance Wing. “The decision cycle with these systems is going to be so fast that they have to be sensitive to and responsive to the state of the individual (operator’s) intent, as much as overt actions and control inputs that (the) human’s providing.”

In other words, instead of the weapon system responding to the human operator physically touching a control, have it respond to the human’s brain cells forming the intention to use a control. “When you start to have a direct neural interface of this type, you don’t necessarily need to command and control the aircraft using the stick,” said Justin Sanchez, director of DARPA’s Biological Technologies Office. “You could potentially re-map your neural signatures onto the different control surfaces” — the tail, the flaps — “or maybe any other part of the aircraft” — say landing gear or weapons. “That part hasn’t really been explored in a huge amount of depth yet.”
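To make the re-mapping idea concrete, here is a minimal sketch in Python, assuming a decoder that already turns neural signatures into discrete intent labels. Every name, label, and mapping below is hypothetical and invented for illustration; nothing here reflects DARPA’s actual implementation.

```python
# Hypothetical sketch of Sanchez's "re-mapping" idea: decoded neural
# intent labels are routed to aircraft control surfaces instead of a
# physical stick. All names and signals here are illustrative only.

from dataclasses import dataclass

@dataclass
class ControlCommand:
    surface: str       # e.g. "elevator", "flaps", "landing_gear"
    deflection: float  # normalized, -1.0 .. +1.0

# A configurable mapping from decoded intent to a control surface.
# In principle the same decoded signature could be re-bound to the
# tail, the flaps, or the landing gear by editing this table.
INTENT_MAP = {
    "pitch_up":     ("elevator", +0.5),
    "pitch_down":   ("elevator", -0.5),
    "extend_flaps": ("flaps", +1.0),
    "gear_down":    ("landing_gear", +1.0),
}

def route_intent(intent_label: str) -> ControlCommand | None:
    """Translate a decoded neural intent into a control command."""
    if intent_label not in INTENT_MAP:
        return None  # unrecognized intent: fail safe, do nothing
    surface, deflection = INTENT_MAP[intent_label]
    return ControlCommand(surface, deflection)

print(route_intent("pitch_up"))  # ControlCommand(surface='elevator', ...)
```

The point of the table-driven design is Sanchez’s observation: the same decoded signature can be re-bound to a different part of the aircraft by changing the mapping, not the hardware.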

Reading minds, even in this limited fashion, will require deep understanding and close monitoring of the brain, where thoughts take measurable form as electrical impulses running from neuron to neuron. “Can we develop precise neurotechnologies that can go to those circuits in the brain or the peripheral nervous system in real time?” Sanchez asked aloud. “Do we have computational systems that allow us to understand what the changes in those signals (mean)? And can we give meaningful feedback, either to the person or to the machine to help them to do their job better?”
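As a rough illustration of the three questions Sanchez poses (sense the circuit, interpret the change in the signal, feed back the result), consider the following toy decoding loop. The sampling rate, frequency bands, and decision rule are all assumptions made up for this sketch; real neurotechnology pipelines are far more elaborate.

```python
# Illustrative-only decoding step: read a window of 'neural' signal,
# extract a simple spectral feature, and interpret the change. This
# shows the sense -> decode -> feedback shape of the problem, no more.

import numpy as np

FS = 1000  # assumed sampling rate, Hz

def band_power(window: np.ndarray, lo: float, hi: float) -> float:
    """Mean spectral power of one signal window in a frequency band."""
    freqs = np.fft.rfftfreq(window.size, d=1.0 / FS)
    power = np.abs(np.fft.rfft(window)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return float(power[mask].mean())

def decode(window: np.ndarray) -> str:
    """Toy rule: strong beta-band (13-30 Hz) activity = command intent."""
    if band_power(window, 13, 30) > band_power(window, 1, 4):
        return "intent"
    return "rest"

# Simulated one-second window: background noise plus a 20 Hz burst.
t = np.arange(FS) / FS
window = np.random.randn(FS) * 0.1 + np.sin(2 * np.pi * 20 * t)
print(decode(window))  # -> "intent"
```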

DARPA’s Revolutionizing Prosthetics program hooked up the brain of a quadriplegic — someone who could move neither arms nor legs — to a prosthetic arm, allowing the patient to control it directly with their thoughts. Then, “they said, ‘I’d like to try to fly an airplane,’” Sanchez recounted. “So we created a virtual flight simulator for this person, allowed this neural interface to interface with the virtual aircraft, and that person flew.”

“That was a real wake-up call for everybody involved,” Sanchez said. “We didn’t initially think you could do that.”

Tony Stark (Robert Downey Jr.) relies on the JARVIS artificial intelligence — exquisitely adapted to his personal strengths and weaknesses — to help pilot his Iron Man suit. (Marvel Comics/Paramount Pictures)


Adapting To The Human

Applying direct neural control to real aircraft — or tanks, or ships, or network cybersecurity systems — will require a fundamental change in design philosophy. Today, said Christensen, we give the pilots tremendous information on the aircraft, then expect them to adapt to it. In the future, we could give the aircraft tremendous information on its pilots, then have it use artificial intelligence to adapt itself to them. The AI could customize the displays, the controls, even the mix of tasks it took on versus those it left up to the humans — all exquisitely tailored not just to the preferences of the individual operator but to his or her current mental and physical state.

When we build planes today, “they’re incredible sensor platforms that collect data on the world around them and on the aircraft systems themselves, (but) at this point, very little data is being collected on the pilot,” Christensen said. “The opportunity there with the technology we’re trying to build now is to provide a continuous monitoring and assessment capability so that the aircraft knows the state of that individual. Are they awake, alert, conscious, fully capable of performing their role as part of this man-machine system? Are there things that the aircraft then can do? Can it unload gees (i.e. reduce g-forces)? Can it reduce the strain on the pilot?”
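A hedged sketch of that monitor-and-adapt loop might look like the following. The state fields, thresholds, and action names are all assumptions invented for illustration, not anything AFRL has described.

```python
# Sketch of the "continuous monitoring and assessment" loop Christensen
# describes: the aircraft tracks pilot state and adapts itself.
# Fields and thresholds below are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class PilotState:
    alertness: float  # 0.0 (unconscious) .. 1.0 (fully alert)
    g_load: float     # current g-forces on the pilot

def adapt_aircraft(state: PilotState) -> list[str]:
    """Decide what the automation should take over this cycle."""
    actions = []
    if state.g_load > 7.0:
        actions.append("unload_gees")          # ease the maneuver
    if state.alertness < 0.3:
        actions.append("autopilot_take_over")  # pilot may be impaired
    elif state.alertness < 0.6:
        actions.append("simplify_displays")    # reduce cognitive load
    return actions

print(adapt_aircraft(PilotState(alertness=0.5, g_load=8.2)))
# -> ['unload_gees', 'simplify_displays']
```

Note the graded response: mild degradation shifts workload, severe degradation hands control to the automation, which mirrors Christensen’s “can it unload gees” framing.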

Air Force MQ-9 Reaper operators in training. Today’s drones require extensive human oversight and are hardly user friendly.

“This kind of ability to sense and understand the state and the capabilities of the human is absolutely critical to the successful employment of highly automated systems,” Christensen said. “The way all of our systems are architected right now, they’re fixed, they’re predictable, they’re deterministic” — that is, any given input always produces the exact same output.

Predictability has its advantages: “We can train to that, they behave in very consistent ways, it’s easier to test and evaluate,” Christensen said. “What we lose in that, though, is the real power of highly automated systems, autonomous systems, as learning systems, of being able to adapt themselves.”

“That adaptation, though, it creates unpredictability,” he continued. “So the human has to adapt alongside the system, and in order to do that, there has to be some mutual awareness, right, so the human has to understand what is the system doing, what is it trying to do, why is that happening; and vice versa, the system has to have some understanding of the human’s intent and also their state and capabilities.”
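The deterministic-versus-adaptive distinction Christensen draws can be shown in a few lines. In this purely illustrative contrast (no fielded system works this simply), the fixed controller returns the same output for the same input forever, while the learning one quietly changes its own behavior as it runs:

```python
# Toy contrast between a "fixed, predictable, deterministic" controller
# and an adaptive one whose behavior drifts as it learns. Illustrative only.

def deterministic_gain(error: float) -> float:
    """Same input, same output, every time: easy to train on and test."""
    return 0.8 * error

class AdaptiveGain:
    """Adjusts its own gain from experience, so identical inputs can
    produce different outputs over time: more capable, less predictable."""
    def __init__(self) -> None:
        self.gain = 0.8

    def __call__(self, error: float) -> float:
        out = self.gain * error
        self.gain += 0.01 * error * error  # crude online adaptation
        return out

ctrl = AdaptiveGain()
print(deterministic_gain(1.0), deterministic_gain(1.0))  # 0.8 0.8
print(ctrl(1.0), ctrl(1.0))  # 0.8, then 0.81: same input, new output
```

The second print line is the unpredictability Christensen is pointing at: once the system adapts, the human can no longer assume yesterday’s behavior, which is why he argues for mutual awareness between operator and machine.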

This kind of synergy between human and artificial intelligence is what some theorists refer to as the “centaur,” after the mythical creature that combined human and horse — with the human part firmly in control, but benefiting from the beast’s strength and speed. The centaur concept, in turn, lies at the heart of the Pentagon’s ideas of human-machine teaming and what’s known as the Third Offset, which seeks to counter (offset) adversaries’ advancing technology with revolutionary uses of AI.

The neuroscience here is in its infancy. But it holds the promise of a happy medium between hamstringing our robots with too-close control and letting them run rampant.
