5 April 2019

PROJECT ALPHAWAR: THE (FICTIONAL) STORY OF ARMY’S C2 AI PROGRAM

Paul Oh 

Strategists, policymakers, military officers, and futurists have all pointed to advances in artificial intelligence as having the potential to change the military and the conduct of warfare. Current efforts to integrate AI, however, have mainly been in peripheral areas like preventive maintenance, human resources, and imagery processing. This fictional narrative envisions how AI may change the core of how the Army fights, namely in aiding the command and control of subordinate elements. By helping us Observe, Orient, Decide, and Act faster, AI has the potential to fundamentally alter how we conduct warfare, in ways presently unimagined.

You could easily spot Col. Jake Stone from across the Pentagon courtyard; he was the one not straight out of central casting. He was lanky, a bit frail looking, and sported a bowl-like haircut that would make any sergeant major cringe. Jake was the last officer you would put in front of troops to inspire them to battle. The softness of his voice would cause you to lean in, while his high pitch would at the same time make you back away. Yet he had an intellect that people usually underappreciated, mostly because they could not keep pace with his unorthodox ideas. Now on the last leg of his career, he had one final mission. He wanted to upend his beloved Army’s hallowed thoughts on war to save the nation from defeat in a future digital conflict.


Although most Westerners missed this feed from their finely calibrated news sources, Jake was among those who were shocked by Lee Sedol’s defeat. In 2016, this world-class, “nine-dan” Korean Go player lost 4-1 to DeepMind’s artificial intelligence program AlphaGo. The utter dismay of this brilliant player of the world’s most complex board game was an image that still disturbed Jake. Go was supposed to be a game not only of skill, but also of creativity and intuition. These were areas where humans supposedly had an unsurpassed advantage, but AlphaGo had proven that theory wrong. Rumor had it that this was when the Chinese decided that AI would be warfare’s game changer. They would focus their investment in this technology to offset American advantages.

The United States was following suit. But Jake saw that the massive military-industrial infrastructure optimized to build hardware couldn’t shift to a digital focus fast enough. The Army was struggling, too. It wanted AI, but entrenched culture, bureaucracy, and processes were monumental obstacles. Jake also thought the focus was misplaced. AI was being used to aid preventive maintenance, for example. These were worthy efforts for sure, but AI was for more than keeping equipment battle ready. AI needed to be about the battle itself—how to fight and win using machine learning to think and act faster than your adversary. AI had the potential to revolutionize warfare at its core, not at the peripheries where the present focus resided.

The Army was moving too slowly, but Jake had a cure. His vision was simple: give the Army an AlphaGo experience. If an AI program could best American officers in commanding and controlling Army units in a simulated wargame, perhaps the institution would feel the same shock and dismay of the Korean Go champion. This would drive the revolutionary changes needed for AI’s integration into the Army’s core operating concepts. AI would change the way that the Army fights its formations. Commanders could use AI to observe faster, orient faster, decide faster, and act faster. Luckily for Jake, his present assignment provided the connections required to make his vision a reality.

After War College, Jake was assigned to the Algorithmic Warfare Cross-Functional Team, better known as Project Maven. Nested under the undersecretary of defense for intelligence, Project Maven was where the DoD’s nascent but cutting-edge work on AI was being conducted. Here he befriended Col. Thom Marax. They were opposites in every aspect of their personalities; Thom was intense and blunt, and spoke about AI adoption with an Old Testament prophet’s fervor. But they were completely aligned in their vision of what AI could bring to the battlefield. Thom knew the ins and outs of the Pentagon bureaucracy and had a wide network of AI professionals both inside and outside the government.

Thom did two monumental things for Jake. First, he secured funding to set up a special research project within Maven. This project was to explore how AI could contribute to the command and control of military formations in land warfare. Second, he introduced and helped hire Karl Kendy as a consultant. Karl was the lead researcher for AlphaGo and had been part of the team that humbled Lee and the human race in 2016. Thom lured Karl with a simple challenge. Go was a game of unsurpassed complexity, but it was still just a board game. Could Karl take the same deep neural network and advanced tree-search techniques of AlphaGo and apply them to the most complex of human endeavors—war? Karl could not resist the challenge. So, Jake and Karl melded minds and devised a plan to train an AI to defeat human commanders in simulated wargames. Project AlphaWar was born.

Clausewitz wrote that everything in war is simple, but the simplest thing is difficult. Friction accounted for this difficulty, but it did not negate the fact that fighting could be reduced to simple tasks. Jake and Karl, along with their hired team of computer programmers and military officers, started with the Army’s Military Decision-Making Process. Here was the ultimate example of decision making broken down to its most basic analytical parts. Karl saw how each step could be expressed mathematically, encoding inputs such as the weather, terrain, and unit capabilities. AlphaWar’s neural networks could take these inputs and process them through successive network layers to produce possible outcomes, or courses of action, for military units.
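A minimal sketch of the idea, offered purely as illustration: a toy network that maps encoded battlefield inputs to a probability distribution over a handful of candidate courses of action. The framework (PyTorch), the feature encoding, the layer sizes, and every name below are assumptions invented for this sketch, not details from the story.

import torch
import torch.nn as nn

# Toy model only: encoded inputs (weather, terrain, unit capabilities) in,
# probabilities over candidate courses of action out. All dimensions invented.
class CoursesOfActionNet(nn.Module):
    def __init__(self, n_features: int, n_candidate_coas: int):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(n_features, 128),
            nn.ReLU(),
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Linear(64, n_candidate_coas),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Softmax turns the raw scores into probabilities that sum to one.
        return torch.softmax(self.layers(x), dim=-1)

# Example: 32 encoded features describing one battlefield snapshot, 5 candidate COAs.
model = CoursesOfActionNet(n_features=32, n_candidate_coas=5)
coa_probs = model(torch.randn(1, 32))

The same shape of model could serve both training stops in the story: enemy courses of action with likelihoods at Fort Huachuca, friendly courses of action with estimated chances of success at Fort Benning.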

After the basic program was written, Jake took the AlphaWar team to the Army’s Intelligence Center at Fort Huachuca. There they observed how Army instructors taught the step-by-step process of understanding the enemy, otherwise known as Intelligence Preparation of the Battlefield. For their final exam, each military intelligence officer was tested on how well he or she analyzed the inputs of terrain, weather, and enemy doctrine and capabilities to formulate enemy courses of action. Taking advantage of the center’s ongoing effort to digitize this process, the team used the available data to train AlphaWar to analyze these inputs like the students did. AlphaWar then produced the output of predicted enemy actions, each with an associated probability of its likelihood.

Then they headed to Fort Benning’s Maneuver Center of Excellence. Here the Army instructors taught the process of taking enemy actions (as an input this time) to formulate friendly plans. For their final exam, each infantry and armor officer was tested on how well he or she could ingest the information about the weather, terrain, and the enemy to produce friendly courses of action. Again, the team used this data to train AlphaWar. Using his algorithms, however, Karl pushed AlphaWar further. AlphaWar took the various possible enemy actions to produce probable friendly actions with estimated probabilities of success. The results were remarkable. AlphaWar learned how the Army structured its processes for decision making.
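A hedged illustration of the training step described at Huachuca and Benning: fitting such a network, by supervised learning, to pairs of encoded situations and the courses of action that instructors graded as correct. The data, shapes, and hyperparameters below are placeholders, not anything drawn from the actual centers.

import torch
import torch.nn as nn

def train_on_exam_data(logits_model: nn.Module,
                       situations: torch.Tensor,     # (N, n_features) encoded inputs
                       approved_coas: torch.Tensor,  # (N,) indices of graded answers
                       epochs: int = 10) -> None:
    optimizer = torch.optim.Adam(logits_model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()  # expects raw logits; applies softmax internally
    for _ in range(epochs):
        logits = logits_model(situations)
        loss = loss_fn(logits, approved_coas)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Fabricated example: 200 graded exam answers, 32 features, 5 candidate COAs.
net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 5))
train_on_exam_data(net, torch.randn(200, 32), torch.randint(0, 5, (200,)))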

The next step for AlphaWar was to pit its assessments against a thinking adversary and fight against a human commander. Jake tapped into his contacts at the School of Advanced Military Studies (SAMS). SAMS prided itself on producing the Army’s “Jedi knights,” a select group of officers educated and trained to think, lead, and plan operations and campaigns. Here he found forward-thinking field-grade officers excited about the prospect of taking on an AI opponent. Maj. Ray Goldsworth was one such officer with experience in Iraq and Afghanistan as well as in training units at the Army’s premier National Training Center. He was the student selected to lead a staff of his fellow “SAMSTERs” against AlphaWar in a simulated wargame. Both teams were assigned human players at terminals to act as subordinate commanders. The game was on.

As expected, the wargame initially did not proceed smoothly. There were hiccups in establishing reporting procedures, building the data architecture, and ensuring connectivity between different systems. But as the games progressed, everybody felt the movement into uncharted territory.

Jake reflected on two observations. First, AlphaWar’s computer programmers worked nonstop to make sure the data was correctly curated, tasks were suitably translated into equations, and the algorithms were constantly refined. AI was supposed to help humans make better decisions. But here, humans were helping the AI make better decisions. Second, the SAMSTERs were superbly trained in decision-point tactics. Ray had at the ready his Decision Support Template showing the battle decisions he needed to make. These “decision points” served as a method to focus his staff’s activities to ensure timely and effective decision making by the commander. AlphaWar needed no such mechanism. It made its decisions astonishingly fast as soon as it ingested the appropriate data. Its capacity to observe, understand the permutations of options, and control everything occurring on the battlefield was beautiful, and frightening, to watch.

After a few iterations, SAMS decided that the educational value for its students was minimal and discontinued the wargames. AlphaWar’s “rapid moves” made the game too fast, and the SAMSTERs could not practice their decision-making processes. Not only that, AlphaWar made some downright inexplicable decisions that left everyone puzzled. The team’s inability to explain AlphaWar’s decisions did not play well in the director’s After Action Reviews. Secretly, Karl had a hunch. He realized that AlphaWar was optimized for victory, but not for limiting its own casualties. Karl had witnessed this with AlphaGo. The AI didn’t care how many “soldiers” it lost as long as the outcome was assured.

News of this wargame caught the attention of the Army’s senior leaders. Sensing the importance of these developments, Gen. John Murray of the Army’s Futures Command offered to sponsor a wargame to be held on the premises of the Army War College, the site of the Army’s Strategic Wargame Program. The War College would have nine months to prepare a scenario that would pit the 1st Armored Division’s division staff against AlphaWar. Each side was to have two turns on offense and two on defense, with the final battle slated as a meeting engagement. Both sides would have human officers virtually commanding their respective subordinate units through computer terminals. Maj. Gen. John Kem, the college’s commandant, planned to bring together the college’s best minds to build this event. Ready or not, the Army was going to have its AlphaGo moment.

Nine months would not give Jake much time. Karl needed access to more data to continue training AlphaWar through reinforcement learning. The algorithms needed to be optimized not only for victory, but also to minimize casualties. The team still needed to figure out how AlphaWar could best communicate with its human subordinates. Meanwhile, there were bugs everywhere. There were doubters everywhere. There were prophets of the end of the human race everywhere. But Jake knew this was a make-or-break moment. This wargame could fundamentally change how the Army thought about artificial intelligence. In what ways, it was anybody’s guess.
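One hedged guess at what “optimized not only for victory, but also to minimize casualties” could mean in practice is reward shaping in the reinforcement-learning loop. The function, weights, and interface below are invented for illustration only.

# Hypothetical reward for one wargame episode: value victory, but subtract a
# penalty that grows with the fraction of the friendly force that was lost.
def shaped_reward(won: bool,
                  friendly_casualties: int,
                  friendly_strength: int,
                  casualty_weight: float = 0.5) -> float:
    victory_term = 1.0 if won else -1.0
    casualty_term = casualty_weight * (friendly_casualties / max(friendly_strength, 1))
    return victory_term - casualty_term

# With casualty_weight = 0 the agent "doesn't care how many soldiers it lost";
# raising the weight trades raw win rate for force preservation.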

As Jake thought about this challenge, he feared that the Army would learn the wrong lessons. Fear and misunderstanding might cause it to see AI as something to be kept in a box, and the Army would fail to capture AlphaWar’s awesome potential. Conversely, it might put too much hope in AI and discount the importance of humans, forgetting that humans had trained AlphaWar to do what humans wanted.

What did he want the Army to learn? He thought about the Go master’s comments when he lost to AlphaGo. Lee stated that playing an AI helped him better understand Go and opened his eyes to how the game could be played in ways never imagined. Jake Stone wanted this. He wanted AlphaWar to help his successors in uniform better understand war and see how it could be fought in ways never imagined. He wanted them to see it before America’s adversaries did.

LTC(P) Paul Oh is currently a student at the Army War College. He’s had multiple command, intelligence, and instructor positions during his twenty-two years of service. He is a ‘97 graduate of the United States Military Academy and holds master’s degrees from Princeton University and the School of Advanced Military Studies.
