17 December 2019

DEFENCE-IN-DEPTH

David Whetham, Kenneth Payne

While much public discussion of Artificial Intelligence (AI) is focused on Terminator-like killing machines, the official UK position is that there will always be a person in the loop (or at least on it – monitoring and able to intervene) making any life-or-death decisions. Unfortunately, this rather ignores defensive uses of AI in integrated systems, where a human operator monitoring and intervening would render the system too slow to be effective – there is no point involving human response times in near-light-speed processes. This means that autonomy is likely to creep in through defensive systems, whatever the stated position of any government. And since there’s no hard and fast definition of a defensive weapon, and a powerful incentive not to be left behind, the spread of autonomous systems is most unlikely to stop there.

So what safeguards can be put in place? Simply relying on legal checklists will not be sufficient, as Admiral Woodward demonstrated in 1982 when he chose not to shoot down an aircraft approaching the British fleet with an apparently hostile profile and on an intercept course, despite having both the rules of engagement (ROE) and legal permission to act. The aircraft turned out to be a Brazilian airliner en route from Durban to Rio de Janeiro. The speed of decision-making required 40 years after the Falklands means that such a decision might now be made by an autonomous system – and it is easy to imagine the humanitarian and political consequences if the British task force had shot down a civilian airliner.


While it is often assumed that people must make better ethical choices than a machine or algorithm, it is more than possible that this simply isn’t true. Machines do not get tired (the single largest factor in bad ethical decision-making), angry or emotional. Potentially, if networked, machines can learn from a single mistake almost immediately and ensure that it is never repeated (there is no need for each to learn from its own experience). It may be that AI could actually make more effective ethical decisions on our behalf than we are capable of in most day-to-day situations. Human experts may make the right decision in some situations, but in most they will be outperformed by anyone (or anything) that simply follows a set of straightforward rules. If one were simply aiming to raise the overall standard of ‘average’ decision-making, it might be more useful to focus on the majority of situations rather than the extreme ones and accept that machines can improve outcomes (this is an argument used for the introduction of driverless cars, or for removing pilots from aircraft cockpits altogether).

However, it is precisely those ‘extreme’ situations that make the difference. Whether it was Admiral Woodward choosing not to shoot down a passenger jet in 1982 despite the evidence and legal permissions in place, or Stanislav Petrov refusing to believe the erroneous data on his early-warning screen telling him that a nuclear attack was under way against the USSR, and therefore not reporting it (a report that would likely have resulted in a Soviet launch), it is not clear how you would develop an AI system that would continue to ask questions even when all of the criteria considered pertinent had been met. And there’s another problem – how to code a consistent set of ethics that the machine could implement. For humans, there’s often a fudge factor involved in squaring moral tensions. Should we, for example, value doing the greatest good, or do we have an equal duty of care for all? There’s a degree of subjectivity involved, and our judgment may shift during the action, making it hard to code for ethics before the fact.

These concerns go beyond automated systems and get right to the heart of the way we interact with technology in many different areas. AI offers enormous leaps in the ability to process huge amounts of data quickly and to reach conclusions based on that information, and this will only increase as quantum computing comes into use. AI is already starting to transform military activities, including logistics, intelligence and surveillance, and even weapons design. As a decision-making assistant, it offers huge potential in the strategic environment as a way of seeing through the fog of war. AI applied to games has demonstrated that novel, game-winning strategies can be developed that, if applied to a contested military environment, might give an edge over an adversary.

Automation bias (demonstrated every time we see someone driving into a river or getting stuck in a narrow lane while following their satnav) means that machine-generated answers are likely to be taken as definitive, even when they are very clearly wrong from any objective position. This misplaced confidence risks removing the psychological uncertainty that introduces caution into planning, and that removal could be extremely dangerous in a conflict that has the potential to escalate into a nuclear exchange (see, for example, the Cuban missile crisis).

Military planners have long sought the Holy Grail of being able to see through the fog of war that introduces so much uncertainty and doubt into military decision-making. Whether under the banner of the Revolution in Military Affairs or Network Centric Warfare, the tantalising promise of reducing friction has been extremely attractive. AI appears to offer the same thing today – machine learning and the ability to crunch huge amounts of data can generate innovative answers, and do it quickly enough to get inside an opponent’s OODA loop in a way that provides a distinct relative advantage. However, caution is not always a bad thing (see, for example, the chapter on patience in Military Virtues). Even setting aside the concerns about seeing inside the black box to identify inherent biases and potentially skewed assumptions in the AI’s calculations, we think the real risk is not the generation of the wrong answer, but the overwhelming temptation to act with confidence and certainty in response to ‘an answer’ even when caution is the appropriate course of action.

Intelligent machines today employ Bayesian reasoning and ‘fuzzy logic’ to weigh uncertainty. In future, we might even design them to be deliberately uncertain about our instructions, and task them with finding out exactly what we want before they act. Leading AI researcher Stuart Russell thinks that’s one way to avoid the ‘King Midas’ problem of machines delivering unintended consequences. Alternatively, it might just turn them into indecisive ditherers. But at the other extreme, if we ourselves are excessively credulous and technophile, there’s a danger of our machines aping our overconfidence.
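
What ‘weighing uncertainty’ might mean in practice can be made concrete with a toy sketch. The short Python fragment below is purely illustrative (every prior, likelihood and threshold in it is invented for the example, not drawn from any real system), but it shows the basic idea: fold a series of sensor cues into a running probability that a contact is hostile, and build in a band in which the only permitted output is ‘defer and keep asking’ rather than a definitive answer.

    # Toy sketch only: a Bayesian update of the probability that an approaching
    # contact is hostile, with an explicit 'defer' band instead of a single
    # hard trigger. All priors, likelihoods and thresholds are invented for
    # illustration and are not taken from any real system.

    def update(prior, p_cue_if_hostile, p_cue_if_benign):
        """One Bayes step: fold a single observed cue into the running estimate."""
        numerator = p_cue_if_hostile * prior
        evidence = numerator + p_cue_if_benign * (1.0 - prior)
        return numerator / evidence

    # Observed cues as (P(cue | hostile), P(cue | benign)) -- hypothetical values.
    cues = [
        (0.9, 0.3),  # on an intercept course
        (0.8, 0.4),  # radar profile resembles an attack aircraft
        (0.6, 0.5),  # no response on civil frequencies
    ]

    p_hostile = 0.05  # hypothetical prior: most contacts are not attacks
    for p_h, p_b in cues:
        p_hostile = update(p_hostile, p_h, p_b)

    # Three-way rule: engage only at near-certainty, stand down at low
    # probability, and otherwise defer -- keep tracking, keep interrogating,
    # refer to a human.
    ENGAGE_ABOVE = 0.995
    STAND_DOWN_BELOW = 0.20

    if p_hostile >= ENGAGE_ABOVE:
        decision = "engagement criteria met"
    elif p_hostile <= STAND_DOWN_BELOW:
        decision = "stand down"
    else:
        decision = "defer: seek more information"

    print(f"P(hostile) = {p_hostile:.3f} -> {decision}")

Even with three apparently hostile cues, the posterior in this toy example remains far short of certainty, so the sketch defers. That residual doubt is exactly what Woodward and Petrov acted on, and exactly what a hard checklist of criteria throws away.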

Uncertainty has prevented many terrible wrongs from being committed in the past. The danger of removing it (or at least appearing to remove it by providing ‘the answer’) is something we should all be wary of in the military environment.
