28 April 2020

ARTIFICIAL INTELLIGENCE IS THE FUTURE OF WARFARE (JUST NOT IN THE WAY YOU THINK)

Paul Maxwell 

Artificial intelligence is among the many hot technologies that promise to change the face of warfare for years to come. Articles abound describing its possibilities and warning of the consequences for those who fall behind in the AI race. The Department of Defense has duly created the Joint Artificial Intelligence Center in the hopes of winning the AI battle. Visions exist of AI enabling autonomous systems to conduct missions, achieving sensor fusion, automating tasks, and making better, quicker decisions than humans. AI is improving rapidly and someday those goals may be achieved. In the meantime, AI's impact will be in the more mundane, dull, and monotonous tasks performed by our military in uncontested environments.

Artificial intelligence is a rapidly developing capability. Extensive research by academia and industry is producing shorter training times and increasingly better results. AI is effective at certain tasks, such as image recognition, recommendation systems, and language translation. Many systems designed for these tasks are fielded today and are producing very good results. In other areas, AI falls well short of human-level performance. Some of these areas include working with scenarios not seen previously by the AI; understanding the context of text (understanding sarcasm, for example) and objects; and multi-tasking (i.e., being able to solve problems of multiple types). Most AI systems today are trained to do one task, and to do so only under very specific circumstances. Unlike humans, they do not adapt well to new environments and new tasks.


Artificial-intelligence models are improving daily and have shown their value in many applications. The performance of these systems can make them very useful for tasks such as identifying a T-90 main battle tank in a satellite image, identifying high-value targets in a crowd using facial recognition, translating text for open-source intelligence, and generating text for use in information operations. The application areas where AI has been most successful are those with large quantities of labeled data, such as image classification (ImageNet), machine translation (Google Translate), and text generation. AI is also very capable in areas like recommendation systems, anomaly detection, prediction systems, and competitive games. An AI system in these domains could assist the military with fraud detection in its contracting services, predicting when weapons systems will fail due to maintenance issues, or developing winning strategies in conflict simulations. All of these applications and more can be force multipliers in day-to-day operations and in the next conflict.

AI’s Shortfalls for Military Applications

As the military looks to incorporate AI's success in these tasks into its systems, some challenges must be acknowledged. The first is that developers need access to data. Many AI systems are trained using data that has been labeled by some expert (e.g., labeling scenes that include an air defense battery), usually a human. Large datasets are often labeled by companies that employ manual methods. Obtaining this data and sharing it is a challenge, especially for an organization that prefers to classify data and restrict access to it. An example military dataset might consist of images produced by thermal-imaging systems and labeled by experts to describe the weapon systems, if any, found in each image. Unless that data is shared with preprocessors and developers, an AI that uses it effectively cannot be created. AI systems are also vulnerable to becoming very large (and thus slow), and consequently susceptible to "dimensionality issues." For example, training a system to recognize images of every possible weapon system in existence would involve thousands of categories. Such systems would require an enormous amount of computing power and lots of dedicated time on those resources. And because we are training a model, the best model would require a practically infinite number of such images to be completely accurate. That is something we cannot achieve. Furthermore, as we train these AI systems, we often attempt to force them to follow "human" rules such as the rules of grammar. However, humans often ignore these rules, which makes developing successful AI systems for things like sentiment analysis and speech recognition challenging. Finally, AI systems can work well in uncontested, controlled domains. However, research is demonstrating that under adversarial conditions, AI systems can easily be fooled, resulting in errors. Certainly, many DoD AI applications will operate in contested spaces, like the cyber domain, and thus we should be wary of their results.
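
To make the data requirement concrete, the sketch below shows roughly what a labeled image dataset looks like from a developer's perspective, assuming a Python/PyTorch workflow; the file layout, label names, and class are hypothetical illustrations, not any actual DoD dataset.

```python
# Hypothetical sketch: a labeled thermal-image dataset as a developer would
# consume it. Paths, labels, and CSV layout are illustrative only.
import csv
from PIL import Image
from torch.utils.data import Dataset

LABELS = ["none", "tank", "air_defense_battery", "artillery"]  # hypothetical label set

class LabeledThermalImages(Dataset):
    """Reads a CSV of (image_path, label) rows produced by human annotators."""
    def __init__(self, csv_path, transform=None):
        with open(csv_path) as f:
            self.rows = list(csv.reader(f))    # each row: path,label
        self.transform = transform

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, idx):
        path, label = self.rows[idx]
        image = Image.open(path).convert("L")  # thermal imagery is single-channel
        if self.transform:
            image = self.transform(image)      # e.g., resize and convert to a tensor
        return image, LABELS.index(label)
```

Without access to the underlying images and the expert labels, none of this can be built, which is the crux of the data-sharing problem.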

Ignoring the enemy's efforts to defeat the AI systems that we may employ, there are limitations to these seemingly super-human models. An AI's image-processing capability is not very robust when given images that differ from its training set—for example, images with poor lighting, taken at an obtuse angle, or partially obscured. Unless these types of images were in the training set, the model may struggle (or fail) to accurately identify the content. Chatbots that might aid our information-operations missions are limited to generating hundreds of words and thus cannot completely replace a human who can write pages at a time. Prediction systems, such as IBM's Watson weather-prediction tool, struggle with dimensionality issues and the availability of input data due to the complexity of the systems they are trying to model. Research may solve some of these problems, but few of them will be solved as quickly as predicted or desired.

Another simple weakness of AI systems is their inability to multi-task. A human is capable of identifying an enemy vehicle, deciding which weapon system to employ against it, predicting its path, and then engaging the target. This fairly simple set of tasks is currently impossible for an AI system to accomplish. At best, a combination of AIs could be constructed in which individual tasks are given to separate models. That type of solution, even if feasible, would entail a huge cost in sensing and computing power, not to mention the training and testing of the system. Many AI systems are not even capable of transferring their learning within the same domain. For example, a system trained to identify a T-90 tank would most likely be unable to identify a Chinese Type 99 tank, despite the fact that they are both tanks and both tasks are image recognition. Many researchers are working to enable systems to transfer their learning, but such systems are years away from production.
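
For readers unfamiliar with the term, transfer learning in practice usually means reusing a trained model's feature extractor and retraining only a small final layer on the new task. The sketch below, which assumes a torchvision ResNet backbone and a hypothetical two-class dataset (target vehicle versus everything else), shows the basic pattern researchers are working to make more general and more automatic.

```python
# Minimal transfer-learning sketch: freeze features learned on one image task,
# retrain only the classification head on a new, smaller labeled dataset.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")   # backbone pretrained on ImageNet
for param in model.parameters():
    param.requires_grad = False                    # freeze the learned features

# Replace only the final layer for the new task (hypothetical: target vs. not-target).
model.fc = nn.Linear(model.fc.in_features, 2)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune_step(images, labels):
    """One training step on a batch from the new dataset."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Even this pattern only works when the two tasks are closely related; it does not give a single model the ability to juggle detection, prediction, and engagement at once.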

Artificial-intelligence systems are also very poor at understanding inputs and the context within those inputs. AI recognition systems do not understand what an image depicts; they simply learn the textures and gradients of its pixels. Given new scenes containing similar gradients, AIs readily identify portions of the picture incorrectly. This lack of understanding can result in misclassifications that humans would not make, such as identifying a boat on a lake as a BMP.

This leads to another weakness of these systems—the inability to explain how they made their decisions. Most of what occurs inside an AI system is a black box, and there is very little a human can do to understand how the system reaches its conclusions. This is a critical problem for high-risk systems such as those that make engagement decisions or whose output feeds critical decision-making processes. The ability to audit a system and learn why it made a mistake is legally and morally important. Additionally, how to assign liability in cases where AI is involved remains an open research question. There have been many recent examples in the news of AI systems making poor decisions based on hidden biases in areas such as loan approvals and parole determinations. Unfortunately, work on explainable AI is many years from bearing fruit.
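
Today's partial auditing tools illustrate how far there is to go. The sketch below, assuming a generic PyTorch image classifier, computes a simple gradient-based saliency map: it can show which pixels most influenced a decision, but it cannot say why the model weighted them that way or whether its reasoning would survive an audit.

```python
# Gradient-based saliency: a limited, partial explanation tool. It highlights
# influential input pixels but does not expose the model's reasoning.
import torch

def saliency_map(model, image):
    """Per-pixel influence scores for the model's top prediction.
    Assumes `image` is a single batched tensor of shape (1, C, H, W)."""
    image = image.clone().detach().requires_grad_(True)
    scores = model(image)                        # shape (1, num_classes)
    scores[0, scores[0].argmax()].backward()     # gradient of the winning class score
    return image.grad.abs().max(dim=1)[0]        # collapse channels into one heat map
```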

AI systems also struggle to distinguish between correlation and causation. The infamous example often used to illustrate the difference is the correlation between drowning deaths and ice cream sales. An AI system fed statistics about these two items would not know that the two patterns correlate only because both are a function of warmer weather, and it might conclude that to prevent drowning deaths we should restrict ice cream sales. This type of problem could manifest itself in a military fraud-prevention system fed data on purchases by month. Such a system could errantly conclude that fraud increases in September as spending increases, when the spike is really just a function of end-of-fiscal-year spending habits.
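
A toy simulation makes the point. In the synthetic data below, both series are driven by a hidden third variable (temperature), so they correlate strongly even though neither causes the other; a model given only the two columns never sees the confound. All numbers are made up for illustration.

```python
# Spurious correlation demo: two series driven by the same hidden variable.
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.uniform(5, 35, size=365)                    # daily temperature (deg C)
ice_cream_sales = 20 * temperature + rng.normal(0, 50, 365)   # sales rise with heat
drownings = 0.3 * temperature + rng.normal(0, 2, 365)         # so do drownings

r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"correlation between sales and drownings: {r:.2f}")    # strongly positive

# A naive model fed only these two columns would "learn" that cutting ice
# cream sales reduces drownings; the real driver (temperature) never appears.
```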

Even setting aside these weaknesses, the military's main concern at the moment should be adversarial attacks. We must assume that potential adversaries will attempt to fool or break any accessible AI systems that we use. Attempts will be made to fool image-recognition engines and sensors; cyberattacks will try to evade intrusion-detection systems; and logistical systems will be fed altered data to clog the supply lines with false requirements.

Adversarial attacks can be separated into four categories: evasion, inference, poisoning, and extraction. It has been shown that these types of attacks are easy to accomplish and often do not even require computing skills. Evasion attacks attempt to fool an AI engine, often in the hopes of avoiding detection—hiding a cyberattack, for example, or convincing a sensor that a tank is a school bus. The primary survival skill of the future may be the ability to hide from AI sensors; the military may need to develop a new type of camouflage to defeat AI systems, because it has been shown that simple obfuscation techniques such as strategic tape placement can fool AI. Evasion attacks are often preceded by inference attacks, which gather information about the AI system that can then be used to enable evasion. Poisoning attacks target AI systems during training to achieve their malicious intent. Here the threat would be enemy access to the datasets used to train our tools: an adversary might insert mislabeled images of vehicles to fool targeting systems, or manipulated maintenance data designed to make imminent system failure look like normal operation. Given the vulnerabilities of our supply chains, this would not be unimaginable and would be difficult to detect. Extraction attacks exploit access to the AI's interface to learn enough about its operation to create a parallel model of the system. If our AIs are not secure from unauthorized users, those users could predict decisions made by our systems and use those predictions to their advantage. One could envision an opponent predicting how an AI-controlled unmanned system will respond to certain visual and electromagnetic stimuli and thus influencing its route and behavior.
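
To give a sense of how little effort evasion can require, the sketch below implements the widely published fast gradient sign method against a generic image classifier; the model, image, and label tensors are placeholders, and epsilon controls how subtle the perturbation is. This is an illustrative sketch of the technique, not a description of any fielded system.

```python
# Evasion sketch (fast gradient sign method): a small, targeted perturbation
# shifts a classifier's output while leaving the image nearly unchanged to a human.
import torch
import torch.nn.functional as F

def fgsm_evasion(model, image, true_label, epsilon=0.03):
    """Return a perturbed copy of `image` crafted to be misclassified."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel in the direction that most increases the model's error,
    # then clamp back to a valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Defenses such as adversarial training and input filtering exist, but none are complete, which is why contested-domain deployments deserve caution.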

The Path Forward for Military AI Usage

Artificial intelligence will certainly have a role in future military applications. There are many application areas where it will enhance productivity, reduce user workload, and operate more quickly than humans. Ongoing research will continue to improve its capability, explainability, and resilience. The military cannot ignore this technology. Even if we do not embrace it, our opponents certainly will, and we must be able to attack and defeat their AIs. However, we must resist the allure of this resurgent technology. Placing vulnerable AI systems in contested domains and making them responsible for critical decisions invites disastrous results. At this time, humans must remain responsible for key decisions.

Given the high probability that our exposed AI systems will be attacked and the current lack of resilience in AI technology, the best areas for military AI investment are those that operate in uncontested domains. Artificial-intelligence tools that are closely supervised by human experts or that have secure inputs and outputs can provide value to the military while alleviating concerns about vulnerabilities. Examples of such systems are medical-imaging diagnostic tools, maintenance-failure prediction applications, and fraud-detection programs. All of these can provide value while limiting the risk from adversarial attacks, biased data, misunderstood context, and more. These are not the super tools promoted by the AI salesmen of the world, but they are the ones most likely to succeed in the near term.

Lt. Col. (Ret.) Paul Maxwell is the Cyber Fellow of Computer Engineering at the Army Cyber Institute at the United States Military Academy. He was a cyber and armor branch officer during his twenty-four years of service. He holds a PhD in electrical engineering from Colorado State University.

The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
