14 July 2017

Info Ops Officer Offers Artificial Intelligence Roadmap

By CHRIS TELLEY

Artificial intelligence, machine learning and autonomy are central to the future of American war. In particular, the Pentagon wants to develop software that can absorb more information from more sources than a human can, analyze it, and either advise the human how to respond or — in high-speed situations like cyber warfare and missile defense — act on its own with careful limits. Call it the War Algorithm, the holy grail of a single mathematical equation designed to give the US military near-perfect understanding of what is happening on the battlefield, help its human designers react more quickly than our adversaries, and thus win our wars. Our coverage of this issue attracted the attention of Capt. Chris Telley, an Army information operations officer studying at the Naval Postgraduate School. In this op-ed, he offers something of a roadmap for the Pentagon to follow as it pursues this highly complex and challenging goal. Read on! The Editor.

“If I had an hour to solve a problem I’d spend 55 minutes thinking about the problem and five minutes thinking about solutions.” Albert Einstein

Artificial intelligence is to be the crown jewel of the Defense Department’s much-discussed Third Offset, the US military’s effort to prepare for the next 20 years. Unfortunately, “joint collaborative human-machine battle networks” are off to a slow, even stumbling, start. Recognizing that today’s AI is different from the robots that have come before, the Pentagon must seize what may be just a fleeting opportunity to get ahead on the adoption curve. Adapting the military to the coming radical change requires simultaneous baby steps: learn first and buy second, while growing leaders who can wield the tools of the fourth industrial revolution.

First and foremost, the US must be willing to stomach the cost to build cutting-edge systems. AI functions wired into free or discounted Internet services work because the companies profit by selling user data; the Pentagon is probably not eligible for this discount. Also, some of our more stovepiped tactical networks may have difficulty providing the large numbers of training data points, up to 10,000,000 events, needed to teach a learning machine. Military AIs will go to school with crayons until we invest significant capital in open architecture data networks. Furthermore, the technicians needed to integrate military AI won’t be cheap either. According to data from Glassdoor, AI engineers earn a national average of 35 percent more than cybersecurity engineers, whom DoD is already jumping through hoops to recruit, and those technical skills aren’t getting any less valuable.

“Last year AI went from research concept to engineering application,” one CEO said. Another thinks the next 10 years may mean the dawn of an Age of Artificial Intelligence. This isn’t just hype. In 2013 an Oxford study forecast that 47 percent of total US jobs were susceptible to computerization. Notably, white-collar workers are beginning to be replaced. It now seems that any job which involves routine manipulation of information on a computer is vulnerable to automation. J.P. Morgan is now using AI solutions to slice 360,000 man hours from loan reviews each year. This year, insurance claims workers began to be replaced by IBM’s Watson Explorer. The crux of our human failing is that an AI can distill intuitive-seeming solutions from millions of possible results and act on those answers far faster than we can. The fastest human gamers can click a keyboard or mouse at a rate of several hundred actions per minute; a computer can execute tens of thousands.

Planners — DoD’s white-collar workers — will be replaced before riflemen. They are just as susceptible to automation as their civilian peers. Right now, synthesizing knowledge and producing a ‘creative and flexible array of means to accomplish assigned missions’ belongs to staff planners. These service members and defense civilians use basically the same tools — PowerPoint, Excel, etc. — as does a contemporary office worker. If a robot can buy stocks and turn a profit or satisfactorily answer 20,000,000 helpdesk queries, certainly it can understand the “tactical terms and control measure graphics that compose the language of tactics.” After all, field manuals and technique publications are just a voluminous trove of ‘and,’ ‘or,’ and ‘not’ logic gates that can be algorithmically diagrammed.

Enemy contact front? Envelop! Need to plan field logistics? Lay this template over semi-permissive terrain! If the product is an Excel workbook or a prefabricated PowerPoint slide, like intelligence preparation of the environment or battlefield calculus, an AI can probably do it better. The robots are coming for us all — even the lowly staff officer.
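To make that claim concrete, here is a minimal sketch of a battle drill expressed as the kind of ‘and,’ ‘or,’ and ‘not’ logic the manuals already contain. The conditions, thresholds, and recommended actions are invented for illustration; a real planning aid would draw them from doctrine and the commander’s guidance, not from a dozen hard-coded branches.

```python
# Toy illustration: a battle drill written as plain boolean logic.
# All conditions and recommended actions here are invented examples.

def recommend_action(contact_front: bool, enemy_fixed: bool,
                     flank_open: bool, ammo_low: bool) -> str:
    """Map a simplified tactical situation to a doctrinal-style response."""
    if contact_front and flank_open and not ammo_low:
        return "Envelop: fix the enemy with the base element, maneuver on the open flank."
    if contact_front and enemy_fixed and ammo_low:
        return "Break contact: suppress, obscure, and withdraw by bounds."
    if contact_front:
        return "React to contact: return fire, seek cover, report."
    return "Continue mission."

if __name__ == "__main__":
    print(recommend_action(contact_front=True, enemy_fixed=False,
                           flank_open=True, ammo_low=False))
```

The point is not that a handful of if-statements constitutes planning; it is that the decision structure is explicit enough to be encoded, and therefore explicit enough for a learning system to approximate and accelerate.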

According to Pedro Domingos, author of ‘The Master Algorithm’, the best way to not lose your job to a robot is to automate it yourself. The key to effectively and efficiently on-boarding these technologies, as well as the multi-domain battles they will effect, is human capital. We need a bench of service members and government civilians who at least understand the lexicon and how to ask the right questions of the application interface. These leaders will provide “adoption capacity” for eventually fielding unilaterally developed defense systems that will form the core of the Third Offset. They will also help us fight on new, cognitive attack surfaces; Microsoft’s @TayTweets chatbot was hacked, not with code, but by Internet trolls slyly teaching it bad behaviors. Just as the Navy trains officers to use celestial navigation while still fighting with GPS, DoD needs leaders who can spar in both the twentieth and twenty-first centuries to enable graceful system degradation.

Overall, AI will be in everything but will not be everything, so the Department must create a career path for these people without creating a career field. The machines will eventually write their own code, so we need thinkers who can operationalize automation rather than build software. Those skills can be acquired by weaving funded massive open online courses, broadening seminars with academia, and training-with-industry tours into standard professional timelines. The US is behind in computer science curriculum; if the Army is to use AI to “lighten the cognitive load” by 2021, as its Robotic and Autonomous Systems Strategy demands, it, and the rest of DoD, will need to nurture and retain people with skills in robotics, computational math, and computational art. These programs need selection criteria and retention incentives to produce at least one AI-literate leader for every battalion-level command on that four-year timeline. This may seem fast, but leading AI experts expected a machine to beat humans at the game Go in 2027; it happened this year.

Since the AI market space is accelerating quickly, there are many possibilities for dual-use applications for the Defense Department. The military, most notably DARPA, has dabbled with AI in efforts like the cyber and self-driving car ‘grand challenges,’ but fielding a variety of functional technological solutions will provide a proving ground before attempting unilateral projects.

There are many promising areas that would help defense planners get their toes in the water. The first is information operations. Predictive and programmatic marketing are incredibly lucrative, algorithmically powered tools, and they are already in use. Combined with AI systems for journalistic content creation, perhaps DoD can overcome a historically slow influence apparatus to beat state and non-state adversary propaganda. (Editor’s note: We are VERY uneasy with this idea — for moral and more provincial reasons.) Can Google Maps, or its competitors, tell us where traffic isn’t, compared to where it was yesterday, as a cue to blend with HUMINT and SIGINT to identify roadside bombs (IEDs)? Similar questions should be asked of emerging applications that compete with humans in the strategy game StarCraft, which could help combined arms planning at the tactical level. The tools being built to examine cancer genomes could also be developed to model the cell mutations of extremist networks.
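The traffic question, for instance, reduces to a simple anomaly check: compare today’s observed volume on each road segment with a historical baseline and flag the segments that are unusually quiet. Below is a minimal sketch of that idea; the segment names, counts, and threshold are all invented, and a fielded system would need real data feeds and far more careful statistics.

```python
# Toy illustration: flag road segments that are unusually quiet today
# compared with their historical baseline. The names, counts, and the
# 0.5 threshold are invented for this example.

def quiet_segments(today, baseline, threshold=0.5):
    """Return segments whose observed traffic falls below threshold * baseline."""
    flagged = []
    for segment, base_count in baseline.items():
        observed = today.get(segment, 0)
        if base_count > 0 and observed < threshold * base_count:
            flagged.append((segment, observed, base_count))
    return flagged

baseline = {"MSR North km 3": 420, "MSR North km 12": 610}  # typical daily vehicle counts
today = {"MSR North km 3": 130, "MSR North km 12": 595}     # counts observed today

for segment, observed, base in quiet_segments(today, baseline):
    print(f"{segment}: {observed} vehicles vs. baseline {base} -- worth a closer look")
```

A quiet stretch of road is only a cue, not an answer; the value comes from fusing that cue with the HUMINT and SIGINT reporting described above.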

Small, short-timeline endeavors like Project Maven, recently created to use machine learning for wading through intelligence data, must provide the network integration experience needed for building larger programs of record. Many small successes will certainly be needed to garner senior leader buy-in if decisive AI tools are to survive the “Valley of Death” between lab experiments and the transition to a program of record.

Fortunately, the AI market space is still coalescing. Unfortunately, it is an exponential technology, so every success or failure is amplified by an order of magnitude. So far, Deputy Defense Secretary Bob Work wants $12 billion to $15 billion in 2017 for programs aimed at “human-machine collaboration and combat teaming” and has received 11 recommendations from the Defense Innovation Advisory Board to get started. If even half of those dollars go to AI research, the DoD will have matched the venture capital spent last year on relevant startups. However, our adversaries will seek to gain advantage. China has already spent billions on AI research programs, and its state-owned investment companies, like ZGC Capital, reside in Santa Clara, Calif.; its military leaders are aiming toward the leading edge of a military revolution of “intelligentization.” It’s also worth noting that many resources, like Google’s TensorFlow, are freely available online for whoever decides to use the technology.
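As for how freely available those resources are: the snippet below, a purely illustrative example using TensorFlow’s bundled Keras API on a toy XOR dataset, is roughly all it takes to train a small neural network on a laptop. Nothing about it is specific to defense, and that is exactly the point.

```python
# Minimal sketch: training a tiny neural network with freely available tools.
# Uses TensorFlow's Keras API on a toy XOR dataset; illustrative only.
import numpy as np
import tensorflow as tf

x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([[0], [1], [1], [0]], dtype=np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x, y, epochs=500, verbose=0)  # trains on the four XOR examples
print(model.predict(x).round())
```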

So, the time is now for artificial intelligence; strategic surprise featuring things like ‘data-driven behavior change’ or AI-modulated denial of the electromagnetic spectrum will pose difficult challenges from which to recover. If we are to ride the disruptive wave of what some call the Great Restructuring, existing AI applications should be re-purposed before attempting defense-only machine learning systems. Also, developing a cadre of AI-savvy leaders is essential for rapid application integration, as well as for planning to handle graceful system degradation. The right AI investment, in understanding, strategy, and leaders, should be our starting block for a race that will surely reshape the character of war in ways we can only begin to imagine.

Capt. Chris Telley is an Army information operations officer assigned to the Naval Postgraduate School. He commanded in Afghanistan and served in Iraq as a United States Marine. He tweets at @chris_telley. These are the opinions of the author and do not reflect the position of the Army or the United States Government.
