24 June 2018

Artificial Intelligence and International Affairs: Disruption Anticipated


This report examines some of the challenges for policymakers that may arise from the advancement and increasing application of AI. It draws together strands of thinking about the impact that AI may have on selected areas of international affairs – from military, human security and economic perspectives – over the next 10–15 years. The report sets out a broad framework to define and distinguish between the types of roles that artificial intelligence might play in policymaking and international affairs: these roles are identified as analytical, predictive and operational.


In analytical roles, AI systems might allow fewer humans to make higher-level decisions, or automate repetitive tasks such as monitoring sensors set up to ensure treaty compliance. In these roles, AI may well change – and in some ways has already changed – the structures through which human decision-makers understand the world. But the ultimate impact of those changes is likely to be incremental rather than transformative.

Predictive uses of AI could have more acute impacts, though likely over a longer timeframe. Such uses may change how policymakers and states understand the potential outcomes of specific courses of action. If such systems become sufficiently accurate and trusted, this could create a power gap between actors equipped with them and those without – with unpredictable results.

Operational uses of AI are unlikely to materialize fully in the near term. The regulatory, ethical and technological hurdles to fully autonomous vehicles, weapons and other physical-world systems such as robotic personal assistants remain very high – although rapid progress is being made towards overcoming them. In the longer term, however, such systems could radically transform not only the way decisions are made but the manner in which they are carried out.

Animation: Artificial Intelligence and the Future of Warfare

The report makes the following recommendations for governments and international non-governmental organizations, which will have a particularly important role in developing and advocating for new ethical norms: 

In the medium to long term, AI expertise must not reside in only a small number of countries – or solely within narrow segments of the population. Governments worldwide should invest in developing and retaining home-grown AI talent if their countries are to avoid dependence on the expertise currently concentrated in the US and China. They should also work to ensure that engineering talent is nurtured across a broad base of the population, in order to mitigate the risk that systems inherit the biases of a narrow developer base.

Corporations, foundations and governments should allocate funding to develop and deploy AI systems with humanitarian goals. The humanitarian sector could derive significant benefit from such systems, which might, for example, shorten response times in emergencies. Since humanitarian applications of AI are unlikely to be immediately profitable for the private sector, however, a concerted effort is needed to develop them on a not-for-profit basis.

Understanding of the capacities and limitations of artificially intelligent systems must not be the exclusive preserve of technical experts. Better education and training on what AI is – and, critically, what it is not – should be made as widely available as possible, while understanding of the underlying ethical and policy goals should be a much higher priority for those developing the technologies.

Developing strong working relationships, particularly in the defence sector, between public and private AI developers is critical, as much of the innovation is taking place in the commercial sector. Ensuring that intelligent systems charged with critical tasks can carry them out safely and ethically will require openness between different types of institutions.

Clear codes of practice are necessary to ensure that the benefits of AI are shared widely while its attendant risks are well managed. In developing these codes, policymakers and technologists should understand the ways in which regulating artificially intelligent systems may differ fundamentally from regulating arms or trade flows, while also drawing relevant lessons from those models.

Developers and regulators must pay particular attention to the question of human–machine interfaces. Artificial and human intelligence are fundamentally different, and interfaces between the two must be designed carefully, and reviewed constantly, in order to avoid misunderstandings that in many applications could have serious consequences.

As this report was being finalized, public attention increasingly turned to the possibility of AI being used to support disinformation campaigns or to interfere in democratic processes. We intend to focus on this area in subsequent work.
