23 February 2019

A New Generation of Intelligence: National Security and Surveillance in the Age of AI

Alexander Babuta

Engaging in open debate will be crucial for the UK Intelligence Community to gain public trust regarding the use of artificial intelligence for national security purposes.

Speaking on the record to an invited audience at RUSI on 21 January 2019, GCHQ Deputy Director for Strategic Policy Paul Killworth described how Artificial Intelligence (AI) and Machine Learning (ML) have the potential to improve the effectiveness and efficiency of various intelligence functions. However, these capabilities bring with them complex legal and ethical considerations, and there is a strong public expectation that the UK’s intelligence agencies will act in a way that protects citizens’ rights and freedoms.

The national security community has expressed a desire to engage in a more open dialogue on these issues, with Killworth stressing that ‘it is absolutely essential that we have the debates around AI and machine learning in the national security space that will deliver the answers and approaches that will give us public consent’. However, given the understandable sensitivity of the agencies’ work, it may prove difficult to provide the public with sufficient reassurance concerning national security uses of AI.

Public acceptance of intelligence agencies’ use of technology is driven by two conflicting sentiments. On the one hand, the public expects the national security community to protect citizens from threats to their safety, and to adopt new methods that may allow it to do so more effectively. On the other hand, the public expects the agencies to innovate only in ways that respect citizens’ rights and freedoms. Achieving this balance is a major challenge for those in the national security community, particularly at a time of such considerable technological change.
The Obligation to Innovate

It is clear why AI is an attractive prospect for a signals intelligence (SIGINT) agency. Machine learning has already revolutionised many sectors of the economy, and for many organisations the use of algorithms has become essential for the efficient extraction of meaningful insights from ever-expanding, disparate data sets. Similarly, the volume, velocity and complexity of the digital data that the UK’s security and intelligence agencies (SIAs) are now required to process are far beyond the capacity of human analysts alone.

Moreover, the SIAs have a legal and societal duty to protect the public from national security threats, and a reluctance to adopt new methods that may allow them to do this more effectively could be perceived as a failure to fulfil that duty. As Killworth said at the RUSI event, ‘either we adapt to start using new techniques, or we become irrelevant’.

This ‘obligation to innovate’ is driven by two main factors. First, SIGINT agencies face a problem of ‘information overload’: while intelligence-gathering capabilities have progressed considerably in recent years, the technology needed to process and analyse the collected data has arguably failed to keep pace. Second, the threat landscape is evolving rapidly: the UK continues to face serious national security threats from a range of sources, and the SIAs’ use of new technology will be crucial to ensure they can keep pace with innovation in their adversaries’ capabilities.

Killworth explained that ‘within an organisation like GCHQ, there is a potential to use machine learning and AI to improve our operational outcomes. We can tackle these large problems and potentially deliver intelligence and security solutions to help keep the UK safe, in ways which we couldn’t do before’. Drawing on the example of the UK’s ‘active cyber defence’ system, he explained how ‘defending UK cyber security systems can be done in new ways using AI and machine learning, and in the future we will almost certainly have to do this, to keep up with the challenges we face. I can’t believe that we will be doing the active cyber defence work we do today in the future, without greater use of AI.’

The challenge ahead lies in ensuring that future technological innovation takes place within a clearly defined regulatory framework, one that reassures the public that individual rights are being respected while acknowledging that specific capabilities must remain secret.
Public Expectations of Privacy

Public attention is increasingly focused on the governance and regulation of data analytics. With the entry into force of the Data Protection Act 2018 (which gives effect to the EU General Data Protection Regulation [GDPR] in UK law), consumers are now more aware than ever before of how their personal data is collected and processed. The use of AI for national security purposes is likely to prove particularly controversial, given the potential for intrusions into privacy and violations of civil liberties. Existing surveillance legislation, such as the Investigatory Powers Act 2016, imposes no AI-specific restrictions or safeguards, and many will question whether the current regime is sufficient to govern the agencies’ use of AI.

This shift in public expectations is well recognised by GCHQ, with Killworth explaining how ‘we’ve got a society that has more robust expectations around human rights, public safety, transparency, scrutiny. There’s a challenge for intelligence officers like myself to explain and justify how we’re doing that in a way that perhaps previous generations never had to.’

However, while surveillance technology is often presented in dystopian terms, many national security uses of AI may not be as controversial as some might expect. In particular, AI has the potential to minimise privacy intrusions, by reducing the volume of personal data that needs to be reviewed by human analysts.

Either way, engaging in open debate will be essential for GCHQ to maintain public trust. Killworth described how ‘what we learned from Edward Snowden as an organisation was that when we’re dealing with technology, when we’re dealing with new ways of carrying out operations, it is absolutely essential that we engage with wider society, civil society, academia, interest groups, technologists’. He added that ‘if we don’t have those debates we don’t have trust. If we don’t have trust, ultimately we won’t have the laws and powers that enable us to conduct our business’.

This sentiment was echoed by Lord Evans, former Director General of the Security Service (MI5), who reiterated at the RUSI event that ‘ultimately our national security depends upon the ability of the agencies and the police to win the operational battles with the terrorists and spies, but they can only do that with political and public consent.’

It is inevitable that certain groups and demographics will oppose all surveillance policy ‘by default’, owing to political or ideological beliefs. But by engaging in a more transparent and open dialogue about its uses of new technology and expectations of privacy, the national security community may be able to gain the trust and consent necessary to continue operating effectively in a rapidly changing world.
