21 September 2019

The Global Expansion of AI Surveillance

STEVEN FELDSTEIN

Artificial intelligence (AI) technology is rapidly proliferating around the world. Startling developments keep emerging, from the advent of deepfake videos that blur the line between truth and falsehood, to advanced algorithms that can beat the best players in the world at multiplayer poker. Businesses harness AI capabilities to improve analytic processing; city officials tap AI to monitor traffic congestion and oversee smart energy metering. Yet a growing number of states are deploying advanced AI surveillance tools to monitor, track, and surveil citizens to accomplish a range of policy objectives—some lawful, others that violate human rights, and many of which fall into a murky middle ground.

In order to appropriately address the effects of this technology, it is important to first understand where these tools are being deployed and how they are being used. Unfortunately, such information is scarce. To provide greater clarity, this paper presents an AI Global Surveillance (AIGS) Index—representing one of the first research efforts of its kind. The index compiles empirical data on AI surveillance use for 176 countries around the world. It does not distinguish between legitimate and unlawful uses of AI surveillance. Rather, the purpose of the research is to show how new surveillance capabilities are transforming the ability of governments to monitor and track individuals or systems. It specifically asks:


Which countries are adopting AI surveillance technology?
What specific types of AI surveillance are governments deploying?
Which countries and companies are supplying this technology?

KEY FINDINGS

AI surveillance technology is spreading at a faster rate to a wider range of countries than experts have commonly understood. At least seventy-five out of 176 countries globally are actively using AI technologies for surveillance purposes. This includes: smart city/safe city platforms (fifty-six countries), facial recognition systems (sixty-four countries), and smart policing (fifty-two countries).

China is a major driver of AI surveillance worldwide. Technology linked to Chinese companies—particularly Huawei, Hikvision, Dahua, and ZTE—is deployed in sixty-three countries, thirty-six of which have signed on to China’s Belt and Road Initiative (BRI). Huawei alone is responsible for providing AI surveillance technology to at least fifty countries worldwide. No other company comes close. The next largest non-Chinese supplier of AI surveillance tech is Japan’s NEC Corporation (fourteen countries).

Chinese companies often couple product pitches with soft loans to encourage governments to purchase their equipment. These tactics are particularly relevant in countries like Kenya, Laos, Mongolia, Uganda, and Uzbekistan, which might otherwise lack access to this technology. This raises troubling questions about the extent to which the Chinese government is subsidizing the purchase of advanced repressive technology.

But China is not the only country supplying advanced surveillance tech worldwide. U.S. companies are also active in this space. AI surveillance technology supplied by U.S. firms is present in thirty-two countries. The most significant U.S. companies are IBM (eleven countries), Palantir (nine countries), and Cisco (six countries). Other companies based in liberal democracies—France, Germany, Israel, Japan—are also playing important roles in proliferating this technology. Democracies are not taking adequate steps to monitor and control the spread of sophisticated technologies linked to a range of violations.

Liberal democracies are major users of AI surveillance. The index shows that 51 percent of advanced democracies deploy AI surveillance systems. In contrast, 37 percent of closed autocratic states, 41 percent of electoral autocratic/competitive autocratic states, and 41 percent of electoral democracies/illiberal democracies deploy AI surveillance technology.1 Governments in full democracies are deploying a range of surveillance technology, from safe city platforms to facial recognition cameras. This does not inevitably mean that democracies are abusing these systems. The most important factor determining whether governments will deploy this technology for repressive purposes is the quality of their governance.

Governments in autocratic and semi-autocratic countries are more prone to abuse AI surveillance than governments in liberal democracies. Some autocratic governments—for example, China, Russia, Saudi Arabia—are exploiting AI technology for mass surveillance purposes. Other governments with dismal human rights records are exploiting AI surveillance in more limited ways to reinforce repression. Yet all political contexts run the risk of unlawfully exploiting AI surveillance technology to obtain certain political objectives.

There is a strong relationship between a country’s military expenditures and a government’s use of AI surveillance systems: forty of the world’s top fifty military spending countries (based on cumulative military expenditures) also use AI surveillance technology.2

The “Freedom on the Net 2018” report identified eighteen countries out of sixty-five that had accessed AI surveillance technology developed by Chinese companies.3 The AIGS Index shows that the number of those countries accessing Chinese AI surveillance technology has risen to forty-seven out of sixty-five countries in 2019.

NOTES

The AIGS Index presents a country-by-country snapshot of AI surveillance technology, with the majority of sources falling between 2017 and 2019. Given the opacity of government surveillance use, it is nearly impossible to pin down by specific year which AI platforms or systems are currently in use.

The AIGS Index uses the same list of independent states included in the Varieties of Democracy (V-Dem) project with two exceptions, totaling 176.4 The V-Dem country list includes all independent polities worldwide but excludes microstates with populations below 250,000.

The AIGS Index does not present a complete list of AI surveillance companies operating in particular countries. The paper uses open source reporting and content analysis to derive its findings. Accordingly, there are certain built-in limitations. Some companies, such as Huawei, may have an incentive to highlight new capabilities in this field. Other companies have opted to downplay their association with surveillance technology and have purposely kept documents out of the public domain.

A full version of the index can be accessed online here: https://carnegieendowment.org/files/AI_Global_Surveillance_Index.pdf. An interactive map keyed to the index that visually depicts the global spread of AI surveillance technology can be accessed here: https://carnegieendowment.org/publications/interactive/ai-surveillance.

All reference source material used to build the index has been compiled into an open Zotero library. It is available here: https://www.zotero.org/groups/2347403/global_ai_surveillance/items.
INTRODUCING THE AI GLOBAL SURVEILLANCE (AIGS) INDEX

AI technology was once relegated to the world of science fiction, but today it surrounds us. It powers our smartphones, curates our music preferences, and guides our social media feeds. Perhaps the most notable aspect of AI is its sudden ubiquity.

In general terms, the goal of artificial intelligence is to “make machines intelligent” by automating or replicating behavior that “enables an entity to function appropriately and with foresight in its environment,” according to computer scientist Nils Nilsson.5 AI is not one specific technology. Instead, it is more accurate to think of AI as an integrated system that incorporates information acquisition objectives, logical reasoning principles, and self-correction capacities. An important AI subfield is machine learning, which is a statistical process that analyzes a large amount of information in order to discern a pattern to explain the current data and predict future uses.6 Several breakthroughs are making new achievements in the field possible: the maturation of machine learning and the onset of deep learning; cloud computing and online data gathering; a new generation of advanced microchips and computer hardware; improved performance of complex algorithms; and market-driven incentives for new uses of AI technology.7
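
To make this definition concrete, consider a minimal illustrative sketch (not drawn from the paper itself): a simple model is fit to historical observations to “discern a pattern,” then used to predict an unseen case. The data and variable names below are hypothetical stand-ins.

```python
import numpy as np

# Hypothetical historical observations: an input quantity (x) and an
# outcome of interest (y). The "pattern" is learned from past data.
x = np.array([2, 4, 6, 8, 10], dtype=float)
y = np.array([1, 3, 4, 6, 7], dtype=float)

# Fit a simple linear model y ~ a*x + b that explains the current data.
a, b = np.polyfit(x, y, deg=1)

# Apply the learned pattern to predict a future, unseen case.
x_future = 12.0
print(f"predicted y for x = {x_future}: {a * x_future + b:.2f}")
```

Production machine-learning systems replace the straight line with far more expressive models such as deep neural networks, but the same explain-then-predict logic applies.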

Unsurprisingly, AI’s impact extends well beyond individual consumer choices. It is starting to transform basic patterns of governance, not only by providing governments with unprecedented capabilities to monitor their citizens and shape their choices but also by giving them new capacity to disrupt elections, elevate false information, and delegitimize democratic discourse across borders.


The focus of this paper is on AI surveillance and the specific ways governments are harnessing a multitude of tools—from facial recognition systems and big data platforms to predictive policing algorithms—to advance their political goals. Crucially, the index does not distinguish between AI surveillance used for legitimate purposes and unlawful digital surveillance. Rather, the purpose of the research is to shine a light on new surveillance capabilities that are transforming the ability of states—from autocracies to advanced democracies—to keep watch on individuals.
AIGS INDEX—METHODOLOGY

The AIGS Index provides a detailed empirical picture of global AI surveillance trends and describes how governments worldwide are using this technology. It addresses three primary questions:
Which countries are adopting AI surveillance technology?
What specific types of AI surveillance are governments deploying?
Which countries and companies are supplying this technology?

The AIGS Index is contained in Appendix 1. It includes detailed information for seventy-five countries where research indicates governments are deploying AI surveillance technology. The index breaks down AI surveillance tools into the following subcategories: 1) smart city/safe city, 2) facial recognition systems, and 3) smart policing. A full version of the index can be accessed online at https://carnegieendowment.org/files/AI_Global_Surveillance_Index.pdf. An interactive map keyed to the index that visually depicts the global spread of AI surveillance technology can be accessed at https://carnegieendowment.org/publications/interactive/ai-surveillance.

All reference source material used to build the index has been compiled into an open Zotero library. It is available at https://www.zotero.org/groups/2347403/global_ai_surveillance/items.

The majority of sources referenced by the index date from between 2017 and 2019. A small number of sources date as far back as 2012. The index uses the same list of countries found in the Varieties of Democracy (V-Dem) project with two minor exceptions.8 The V-Dem country list includes all independent polities worldwide but excludes microstates with populations below 250,000. The research collection effort combed through open-source material, country by country, in English and other languages, including news articles, websites, corporate documents, academic articles, NGO reports, expert submissions, and other public sources. It relied on systematic content analysis for each country, incorporating multiple sources to determine the presence of relevant AI surveillance technology and corresponding companies. Sources were categorized into tiered levels of reliability and accuracy. First-tier sources include major print and news magazine outlets (such as the New York Times, Economist, Financial Times, and Wall Street Journal). Second-tier sources include major national media outlets. Third-tier sources include web articles, blog posts, and other less substantiated sourcing; these were included only when corroborated by multiple sources.

Given limited resources and staffing constraints (one full-time researcher plus volunteer research assistance), the index is only able to offer a snapshot of AI surveillance levels in a given country. It does not provide a comprehensive assessment of all relevant technology, government surveillance uses, and applicable companies. Because research relied primarily on content analysis and literature reviews to derive its findings, there are certain built-in limitations. Some companies, such as Huawei, may have an incentive to highlight new capabilities in this field. Other companies may wish to downplay links to surveillance technology and purposely keep documents out of the public domain.

Field-based research involving on-the-ground information collection and verification would be useful to undertake. A number of countries—such as Angola, Azerbaijan, Belarus, Hungary, Peru, Sri Lanka, Tunisia, and Turkmenistan—provided circumstantial or anecdotal evidence of AI surveillance, but not enough verifiable data to warrant inclusion in the index.

A major difficulty was determining which AI technologies should be included in the index. AI technologies that directly support surveillance objectives—smart city/safe city platforms, facial recognition systems, smart policing systems—are included in the index. Enabling technologies that are critical to AI functioning but not directly responsible for surveillance programs are not included in the index.

Another data collection challenge is that governments (and many companies) purposely hide their surveillance capabilities. As such, it is difficult to precisely determine the extent to which states are deploying algorithms to support their surveillance objectives, or whether AI use is more speculative than real.

The index does not differentiate between governments that expansively deploy AI surveillance techniques versus those that use AI surveillance to a much lesser degree (for example, the index does not include a standardized interval scale correlating to levels of AI surveillance). This is by design. Because this is a nascent field and there is scant information about how different countries are using AI surveillance techniques, attempting to score a country’s relative use of AI surveillance would introduce a significant level of researcher bias. Instead, a basic variable was used: is there documented presence of AI surveillance in a given country? If so, what types of AI surveillance technology is the state deploying? Future research may be able to assess and analyze levels of AI surveillance on a cross-comparative basis.
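
The coding scheme described above can be pictured as a simple per-country record. The sketch below is purely illustrative—the field names and example values are hypothetical, not the index’s actual data schema.

```python
from dataclasses import dataclass, field

@dataclass
class CountryRecord:
    """One hypothetical AIGS Index entry: documented presence of AI
    surveillance (yes/no), broken down by the three subcategories."""
    country: str
    smart_city: bool = False          # smart city/safe city platform documented
    facial_recognition: bool = False  # facial recognition system documented
    smart_policing: bool = False      # smart policing documented
    sources: list = field(default_factory=list)  # corroborating citations

    @property
    def ai_surveillance_present(self) -> bool:
        # The index's basic variable: any documented subcategory counts.
        return self.smart_city or self.facial_recognition or self.smart_policing

# Illustrative entry only -- not a real index record.
record = CountryRecord("Exampleland", facial_recognition=True,
                       sources=["tier-one news report, 2018"])
print(record.ai_surveillance_present)  # True
```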

Finally, instances of AI surveillance documented in the index are not specifically tied to harmful outcomes. The index does not differentiate between unlawful and legitimate surveillance. In part, this is because it is exceedingly difficult to determine what specifically governments are doing in the surveillance realm and what the associated impacts are; there is too much that is unknown and hidden.
FINDINGS AND THREE KEY INSIGHTS

The findings indicate that at least seventy-five out of 176 countries globally are actively using AI technologies for surveillance purposes. This includes: smart city/safe city platforms (fifty-six countries), facial recognition systems (sixty-four countries), and smart policing (fifty-two countries). Three key insights emerge from the AIGS Index’s findings.

First, global adoption of AI surveillance is increasing at a rapid pace around the world. Seventy-five countries, representing 43 percent of total countries assessed, are deploying AI-powered surveillance in both lawful and unlawful ways. The pool of countries is heterogeneous—they come from all regions, and their political systems range from closed autocracies to advanced democracies. The “Freedom on the Net 2018” report raised eyebrows when it reported that eighteen out of sixty-five assessed countries were using AI surveillance technology from Chinese companies.9 The report’s assessment period ran from June 1, 2017 to May 31, 2018. One year later, the AIGS Index finds that forty-seven countries out of that same group are now deploying AI surveillance technology from China.

Unsurprisingly, countries with authoritarian systems and low levels of political rights are investing heavily in AI surveillance techniques. Many governments in the Gulf, East Asia, and South/Central Asia are procuring advanced analytic systems, facial recognition cameras, and sophisticated monitoring capabilities. But liberal democracies in Europe are also racing ahead to install automated border controls, predictive policing, safe cities, and facial recognition systems. In fact, it is striking how many safe city surveillance case studies posted on Huawei’s website relate to municipalities in Germany, Italy, the Netherlands, and Spain.

Regionally, there are clear disparities. The East Asia/Pacific and the Middle East/North Africa regions are robust adopters of these tools. South and Central Asia and the Americas also demonstrate sizable take-up of AI surveillance instruments. Sub-Saharan Africa is a laggard—fewer than one-quarter of its countries have invested in AI surveillance. Most likely this is due to technological underdevelopment (African countries are struggling to extend broadband access to their populations; the region contains eighteen of the twenty countries with the lowest levels of internet penetration).10 Given how aggressively Chinese companies are penetrating African markets via BRI, these numbers will likely rise in the coming years. Figure 1 shows the percentage breakdown by region of countries adopting AI surveillance.


Second, China is a major supplier of AI surveillance. Technology linked to Chinese companies is found in at least sixty-three countries worldwide. Huawei alone is responsible for providing AI surveillance technology to at least fifty countries. There is also considerable overlap between China’s Belt and Road Initiative and AI surveillance—thirty-six out of eighty-six BRI countries also contain significant AI surveillance technology. However, China is not the only country supplying advanced surveillance technology. France, Germany, Japan, and the United States are also major players in this sector. U.S. companies, for example, have an active presence in thirty-two countries. Figure 2 breaks down the leading companies in the sector.


Third, liberal democracies are major users of AI surveillance. The index shows that 51 percent of advanced democracies deploy AI surveillance systems. In contrast, 37 percent of closed autocratic states, 41 percent of electoral autocratic/competitive autocratic states, and 41 percent of electoral democracies/illiberal democracies deploy AI surveillance technology. Liberal democratic governments are aggressively using AI tools to police borders, apprehend potential criminals, monitor citizens for bad behavior, and pick suspected terrorists out of crowds. This does not necessarily mean that democracies are using this technology unlawfully. The most important factor determining whether governments will exploit this technology for repressive purposes is the quality of their governance—is there an existing pattern of human rights violations? Are there strong rule of law traditions and independent institutions of accountability? That should provide a measure of reassurance for citizens residing in democratic states.

But advanced democracies are struggling to balance security interests with civil liberties protections. In the United States, increasing numbers of cities have adopted advanced surveillance systems. A 2016 investigation by Axios’s Kim Hart revealed, for example, that the Baltimore police had secretly deployed a camera-equipped surveillance plane to carry out daily surveillance over the city’s residents: “From a plane flying overhead, powerful cameras capture aerial images of the entire city. Photos are snapped every second, and the plane can be circling the city for up to 10 hours a day.”11 Baltimore’s police also deployed facial recognition cameras to monitor and arrest protesters, particularly during the 2015 riots in the city.12 The ACLU condemned these techniques as the “technological equivalent of putting an ankle GPS [Global Positioning System] monitor on every person in Baltimore.”13

On the U.S.-Mexico border, an array of high-tech companies also purvey advanced surveillance equipment. Israeli defense contractor Elbit Systems has built “dozens of towers in Arizona to spot people as far as 7.5 miles away,” writes the Guardian’s Olivia Solon. Its technology was first perfected in Israel under a contract to build a “smart fence” to separate Jerusalem from the West Bank. Another company, Anduril Industries, “has developed towers that feature a laser-enhanced camera, radar and a communications system” that scans a two-mile radius to detect motion. Captured images “are analysed using artificial intelligence to pick out humans from wildlife and other moving objects.”14 It is unclear to what extent these surveillance deployments are covered by U.S. law, let alone whether they meet the necessity and proportionality standard.

The United States is not the only democracy embracing AI surveillance. In France, the port city of Marseille initiated a partnership with ZTE in 2016 to establish the Big Data of Public Tranquility project. The goal of the program is to reduce crime by establishing a vast public surveillance network featuring an intelligence operations center and nearly one thousand intelligent closed-circuit television (CCTV) cameras (the number will double by 2020). Local authorities trumpet that this system will make Marseille “the first ‘safe city’ of France and Europe.”15 Similarly, in 2017, Huawei “gifted” a showcase surveillance system to the northern French town of Valenciennes to demonstrate its safe city model. The package included upgraded high definition CCTV surveillance and an intelligent command center powered by algorithms to detect unusual movements and crowd formations.16

The fact that so many democracies—as well as autocracies—are taking up this technology means that regime type is a poor predictor for determining which countries will adopt AI surveillance.

A better predictor for whether a government will procure this technology is related to its military spending. A breakdown of military expenditures in 2018 shows that forty of the top fifty military spending countries also have AI surveillance technology.17 These countries span from full democracies to dictatorial regimes (and everything in between). They comprise leading economies like France, Germany, Japan, and South Korea, and poorer states like Pakistan and Oman. This finding is not altogether unexpected; countries with substantial investments in their militaries tend to have higher economic and technological capacities as well as specific threats of concern. If a country takes its security seriously and is willing to invest considerable resources in maintaining robust military-security capabilities, then it should come as little surprise that the country will seek the latest AI tools. The motivations for why European democracies acquire AI surveillance (controlling migration, tracking terrorist threats) may differ from Egypt or Kazakhstan’s interests (keeping a lid on internal dissent, cracking down on activist movements before they reach critical mass), but the instruments are remarkably similar. Future research might examine country-level internal security figures and compare them to levels of AI surveillance.
DISTINGUISHING BETWEEN LEGITIMATE AND UNLAWFUL SURVEILLANCE

State surveillance is not inherently unlawful. Governments have legitimate reasons to undertake surveillance that is not rooted in a desire to enforce political repression and limit individual freedoms. For example, tracking tools play a vital role in preventing terrorism. They help security forces deter bad acts and resolve problematic cases. They give authorities the ability to monitor critical threats and react accordingly. But technology has changed the nature of how governments carry out surveillance and what they choose to monitor. The internet has vastly expanded the amount of transactional data, or “metadata,” available about individuals, such as information about sent and received emails, location identification, web-tracking, and other online activities. As former UN special rapporteur Frank La Rue noted in a milestone 2013 surveillance report:

Communications data are storable, accessible and searchable, and their disclosure to and use by State authorities are largely unregulated. Analysis of this data can be both highly revelatory and invasive, particularly when data is combined and aggregated. As such, States are increasingly drawing on communications data to support law enforcement or national security investigations. States are also compelling the preservation and retention of communication data to enable them to conduct historical surveillance.18

It goes without saying that such intrusions profoundly affect an individual’s right to privacy—to not be subjected to what the Office of the UN High Commissioner for Human Rights (OHCHR) called “arbitrary or unlawful interference with his or her privacy, family, home or correspondence.”19 Surveillance likewise may infringe upon an individual’s right to freedom of association and expression. Under international human rights law, three principles are critical to assessing the lawfulness of a particular surveillance action.

First, does domestic law allow for surveillance? La Rue’s successor, David Kaye, issued a report in 2019 that affirmed that legal regulations should be “formulated with sufficient precision to enable an individual to regulate his or her conduct accordingly and it must be made accessible to the public.” Legal requirements should not be “vague or overbroad,” which would allow unconstrained discretion to government officials. The legal framework itself should be “publicly accessible, clear, precise, comprehensive and non-discriminatory.”20

Second, does the surveillance action meet the “necessity and proportionality” international legal standard, which restricts surveillance to situations that are “strictly and demonstrably necessary to achieve a legitimate aim”?21

Third, are the interests justifying the surveillance action legitimate? Disagreements abound when it comes to determining what constitutes legitimate surveillance and what is an abuse of power. While governments commonly justify surveillance on national security or public order grounds, the OHCHR warns that such restrictions may “unjustifiably or arbitrarily” restrict citizens’ rights to freedom of opinion and expression. It contends that legitimate surveillance requires states to “demonstrate the risk that specific expression poses to a definite interest in national security or public order,” and that a “robust, independent oversight system” that entrusts judiciaries to authorize relevant surveillance measures and provide remedies in cases of abuse is required.22 Kaye adds that legitimate surveillance should only apply when the interest of a “whole nation is at stake,” and should exclude surveillance carried out “in the sole interest of a Government, regime or power group.”23

The legal standards required to legitimately carry out surveillance are high, and governments struggle to meet them. Even democracies with strong rule of law traditions and robust oversight institutions frequently fail to adequately protect individual rights in their surveillance programs. Countries with weak legal enforcement or authoritarian systems “routinely shirk these obligations.”24 As the OHCHR’s inaugural report on privacy in the digital age concludes, states with “a lack of adequate national legislation and/or enforcement, weak procedural safeguards and ineffective oversight” bring reduced accountability and heightened conditions for unlawful digital surveillance.25

AI surveillance exacerbates these conditions and makes it likelier that democratic and authoritarian governments may carry out surveillance that contravenes international human rights standards. Frank La Rue explains: “Technological advancements mean that the State’s effectiveness in conducting surveillance is no longer limited by scale or duration. Declining costs of technology and data storage have eradicated financial or practical disincentives to conducting surveillance. As such, the State now has a greater capability to conduct simultaneous, invasive, targeted and broad-scale surveillance than ever before.”26

AI surveillance in particular offers governments two major capabilities. One, AI surveillance allows regimes to automate many tracking and monitoring functions formerly delegated to human operators. This brings cost efficiencies, decreases reliance on security forces, and overrides potential principal-agent loyalty problems (where the very forces operating at the behest of the regime decide to seize power for themselves).

Two, AI technology can cast a much wider surveillance net than traditional methods. Unlike human operatives “with limited reserves of time and attention,” AI systems never tire or fatigue.27 As a result, this creates a substantial “chilling effect” even without resorting to physical violence; citizens never know if an automated bot is monitoring their text messages, reading their social media posts, or geotracking their movements around town.28

This paper recognizes that AI surveillance technology is “value neutral.” In and of themselves, these tools do not foment repression, and their presence does not mean that a government is using them for antidemocratic purposes. The index does not specify, country by country, whether these instruments are being used by governments in lawful or illegitimate ways. Rather, the purpose of the index is to identify which countries possess sufficiently advanced tools that allow them to pursue a range of surveillance objectives.
HOW MUCH IS CHINA DRIVING THE SPREAD OF AI SURVEILLANCE?

Empirically, the AIGS Index shows that Chinese companies—led by Huawei—are leading suppliers of AI surveillance around the world. Overall, China is making a sustained push for leadership and primacy in AI.29 A growing consensus singles out China as a global driver of “authoritarian tech.” Experts claim that Chinese companies are working directly with Chinese state authorities to export “authoritarian tech” to like-minded governments in order to spread influence and promote an alternative governance model.30 But is this accurate?

There is some truth to this argument—a subset of Chinese exports goes directly to countries like Zimbabwe and Venezuela that are gross human rights violators and which would otherwise be unable to access such technology. But AI surveillance is not solely going from one authoritarian country (China) to other authoritarian states. Rather, transfers are happening in a much more heterogeneous fashion. China is exporting surveillance tech to liberal democracies as much as it is targeting authoritarian markets. Likewise, companies based in liberal democracies (for example, Germany, France, Israel, Japan, South Korea, the UK, the United States) are actively selling sophisticated equipment to unsavory regimes.

Saudi Arabia is a good case in point. Huawei is helping the government build safe cities, but Google is establishing cloud servers, UK arms manufacturer BAE has sold mass surveillance systems, NEC is vending facial recognition cameras, and Amazon and Alibaba both have cloud computing centers in Saudi Arabia and may support a major smart city project.31 The index shows that repressive countries rarely procure such technology from a single source. In Thailand, government officials repeatedly emphasized the importance of “foreign policy balancing” and not affiliating too strongly with any one side: “Always been that way. That’s why we’re still a kingdom. We compromise, we negotiate, and we balance.”32

That being said, there are special reasons why experts are applying greater scrutiny to Chinese companies. Huawei is the leading vendor of advanced surveillance systems worldwide by a wide margin. Its technology is linked to more countries in the index than that of any other company. It is aggressively seeking new markets in regions like sub-Saharan Africa. Huawei is not only providing advanced equipment but also offering ongoing technological support to set up, operate, and manage these systems.

A recent investigative report by the Wall Street Journal provides an eye-opening example. The reporters found that Huawei technicians in both Uganda and Zambia helped government officials spy on political opponents. This included “intercepting their encrypted communications and social media, and using cell data to track their whereabouts.” Not only did Huawei employees play a “direct role in government efforts to intercept the private communications of opponents,” but they also encouraged Ugandan security officials to travel to Algeria so they could study Huawei’s “intelligent video surveillance system” operating in Algiers.33 Uganda subsequently agreed to purchase a similar facial recognition surveillance system from Huawei costing $126 million.34

The Australian Strategic Policy Institute’s project on Mapping China’s Tech Giants indicates that Huawei is responsible for seventy-five “smart city-public security projects,” and has seen a colossal increase in its business line: “In 2017, Huawei listed 40 countries where its smart-city technologies had been introduced; in 2018, that reach had reportedly more than doubled to 90 countries (including 230 cities).”35 Huawei is directly pitching the safe city model to national security agencies, and China’s Exim Bank appears to be sweetening the deal with subsidized loans. The result is that a country like Mauritius obtains long-term financing from the Chinese government, which mandates contracting with Chinese firms.36 The Mauritian government then turns to Huawei as the prime contractor or sub-awardee to set up the safe city and implement advanced surveillance controls.

It is also increasingly clear that firms such as Huawei operate with far less independence from the Chinese government than they claim. Huawei was founded in 1987 by Ren Zhengfei, a former officer in the People’s Liberation Army who served in its “military technology division,” as Anna Fifield of the Washington Post has noted.37 There are consistent reports that Huawei receives significant subsidies from the Chinese government.38 There also appear to be strong connections between Huawei’s leadership and China’s security and intelligence apparatus. Sun Yafang, for example, chairwoman of Huawei’s board from 1999 to 2018, once worked in China’s Ministry of State Security.39 Max Chafkin and Joshua Brustein reported in Bloomberg Businessweek that there are allegations that Ren may have been a “high-ranking Chinese spymaster and indeed may still be.”40 Experts maintain that the Chinese Communist Party is increasingly establishing “party ‘cells’” in private companies to enable enhanced access and control.41 Huawei has publicly averred that it would “definitely say no” to any demands by the Chinese government to hand over user data.42 But this contravenes a 2015 Chinese national security law that mandates that companies allow third-party access to their networks and turn over source code or encryption keys upon request.43 Huawei’s declared ownership structure is remarkably opaque. A recent academic study by Christopher Balding and Donald C. Clarke concluded that 99 percent of Huawei shares are controlled by a “trade union committee,” which in all likelihood is a proxy for Chinese state control of the company.44

Even if Chinese companies are making a greater push to sell advanced surveillance tech, the issue of intentionality remains perplexing—to what extent are Chinese firms like Huawei and ZTE operating out of their own economic self-interest when peddling surveillance technology versus carrying out the bidding of the Chinese state? At least in Thailand, recent research interviews did not turn up indications that Chinese companies are pushing a concerted agenda to peddle advanced AI surveillance equipment or encourage the government to build sophisticated monitoring systems. An official from Thailand’s Ministry of Interior noted that while AI technology is “out there” and something the government is thinking more about, “China hasn’t offered any AI. It doesn’t give AI—Thais have to ask.”45 The smart city/safe city model also garnered skepticism. Somkiat Tangkitvanich, a leading technology expert in Thailand, commented, “the idea of a smart city is a joke.” He relayed a recent conversation he had with Thailand’s information and communications technologies (ICT) minister: “He [the minister] boasted about the smart city in Phuket. . . . He told me that we are thinking about giving wristbands to tourists so that we can track them, we can help them. Something like that. But it’s not really implemented. Smart city in Phuket turns out to be providing free Wi-Fi and internet to tourists!”46 This serves as a useful reminder that more on-the-ground research is needed to separate hyperbole from fact in this area.
TYPES OF AI SURVEILLANCE

The following sections will describe key AI surveillance techniques and how governments worldwide are deploying them to support specific policy objectives.

States use AI technology to accomplish a broad range of surveillance goals. This section details three primary AI surveillance tools incorporated in the AIGS Index: smart city/safe city platforms, facial recognition systems, and smart policing. It also describes enabling technologies—such as cloud computing and Internet of Things (IOT) networks—that are integral for AI surveillance tools to function. Enabling technologies are not incorporated in the index.

Importantly, AI surveillance is not a standalone instrument of repression. It forms part of a suite of digital repression tools—information and communications technologies used to surveil, intimidate, coerce, and harass opponents in order to inflict a penalty on a target and deter specific activities or beliefs that challenge the state.47 (See Appendix 2 for more information.) Table 1 summarizes each technique and its corresponding level of global deployment.

SMART CITIES/SAFE CITIES

The World Bank describes smart cities as “technology-intensive” urban centers featuring an array of sensors that gather information in real time from “thousands of interconnected devices” in order to facilitate improved service delivery and city management.48 They help municipal authorities manage traffic congestion, direct emergency vehicles to needed locations, foster sustainable energy use, and streamline administrative processes. But there is growing concern that smart cities are also enabling a dramatic increase in public surveillance and intrusive security capabilities. IBM, one of the original coiners of the term, designed a brain-like municipal model where information relevant to city operations could be centrally processed and analyzed.49 A key component of IBM’s smart city is public safety, which incorporates an array of sensors, tracking devices, and surveillance technology to increase police and security force capabilities.

Huawei has been up-front about trumpeting public safety technologies for smart cities. It is marketing “safe cities” to law enforcement communities to “predict, prevent, and reduce crime” and “address new and emerging threats.”50 In a 2016 white paper, Huawei describes a “suite of technology that includes video surveillance, emergent video communication, integrated incident command and control, big data, mobile, and secured public safety cloud” to support local law enforcement and policing as well as the justice and corrections system.51 Huawei explicitly links its safe city technology to confronting regional security challenges, noting that in the Middle East, its platforms can prevent “extremism”; in Latin America, safe cities enable governments to reduce crime; and that in North America, its technology will help the United States advance “counterextremism” programs.52

How do these platforms work in practice to advance surveillance goals? The IT firm Gartner, which partners with Microsoft on smart cities, provides an example:

Saudi Arabia’s Makkah Region Development Authority (MRDA) created a crowd-control system to increase safety and security of Hajj pilgrims. Data is collected via a wristband embedding identity information, special healthcare requirements and a GPS. In addition, surveillance cameras are installed to collect and analyze real-time video along the Al Mashaaer Al Mugaddassah Metro Southern Line (MMMSL), as well as in the holy sites, such as Great Mosque of Mecca, Mount Arafat, Jamarat and Mina.53

Unsurprisingly, such systems lend themselves to improper use. Recently, Huawei’s safe city project in Serbia, which involves installing 1,000 high-definition (HD) cameras with facial recognition and license plate recognition capabilities at 800 locations across Belgrade, sparked national outrage.54 Huawei posted a case study (since removed) about the benefits of safe cities and described how similar surveillance technology had facilitated the apprehension of a Serbian hit-and-run perpetrator who had fled the country to a city in China: “Based on images provided by Serbian police, the . . . [local] Public Security Bureau made an arrest within three days using new technologies.”55 Rather than applaud the efficiency of the system, Serbian commentators observed that in a country racked by endemic corruption and encroaching authoritarianism, such technology offers a powerful tool for Serbian authorities to curb dissent and perpetrate abuses.

Smart city platforms with a direct public security link are found in at least fifty-six of seventy-five countries with AI surveillance technology.
FACIAL RECOGNITION SYSTEMS

Facial recognition is a biometric technology that uses cameras—capturing either video or still images—to match stored or live footage of individuals with images from a database. Not all facial recognition systems focus on individual identification via database matching. Some systems are designed to assess aggregate demographic trends or to conduct broader sentiment analysis via facial recognition crowd scanning.
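
At a technical level, the database-matching step typically reduces each face image to a numeric “embedding” and compares embeddings by similarity. The sketch below is schematic only: real systems derive embeddings from trained deep networks, whereas here random vectors stand in for them, and the match threshold is an arbitrary assumption.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(seed=0)

# Hypothetical enrolled database: identity -> 128-dimensional embedding.
database = {name: rng.normal(size=128) for name in ("id_001", "id_002", "id_003")}

# A live camera capture of id_002, perturbed to mimic capture noise.
live_capture = database["id_002"] + rng.normal(scale=0.1, size=128)

# Compare the live capture against every enrolled identity.
MATCH_THRESHOLD = 0.8  # arbitrary illustrative value
scores = {name: cosine_similarity(live_capture, emb) for name, emb in database.items()}
best = max(scores, key=scores.get)
print(best if scores[best] >= MATCH_THRESHOLD else "no match")
```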

Unlike ordinary CCTV, which has been a mainstay of police forces for twenty-five years, facial recognition cameras are much more intrusive. They can scan distinctive facial features in order to create detailed biometric maps of individuals without obtaining consent. Often facial recognition surveillance cameras are mobile and concealable. For example, security forces in Malaysia have entered into a partnership with the Chinese tech company Yitu to equip officers with facial recognition body cameras. This will allow security officials to “rapidly compare images caught by live body cameras with images from a central database.”56

Huawei is a major purveyor of facial recognition video surveillance, particularly as part of its safe city platforms. It describes the technology’s benefits in the Kenya Safe City project:

As part of this project, Huawei deployed 1,800 HD cameras and 200 HD traffic surveillance systems across the country’s capital city, Nairobi. A national police command center supporting over 9,000 police officers and 195 police stations was established to achieve monitoring and case-solving. The system worked during Pope Francis’ visit to Kenya in 2015, where more than eight million people welcomed his arrival. With Huawei’s HD video surveillance and a visualized integrated command solution, the efficiency of policing efforts as well as detention rates rose significantly.57

Experts detail several concerns associated with facial recognition.

First, few rules govern access to and the use of image databases (repositories that store captured images from facial recognition cameras). How governments use this information, how long images are stored, and where authorities obtain such images in the first place are opaque issues and vary by jurisdiction. Recent disclosures that U.S. law enforcement agencies (the Federal Bureau of Investigation and Immigration and Customs Enforcement) scanned through millions of photos in state driver’s license databases without prior knowledge or consent come as little surprise. The vacuum of legal checks and balances has led to a “surveillance-first, ask-permission-later system,” Drew Harwell noted in the Washington Post.58

Second, the accuracy of facial recognition technology varies significantly. Certain tests have disclosed unacceptably high false-match rates. A recent independent report on the UK’s Metropolitan Police found that its facial recognition technology had an extraordinary error rate of 81 percent.59 Similarly, Axon, a leading supplier of police body cameras in the United States, announced that it would cease offering facial recognition on its devices. Axon’s independent ethics board stated: “Face recognition technology is not currently reliable enough to ethically justify its use.”60

But other assessments demonstrate much more favorable results. Evaluations conducted between 2014 and 2018 of 127 algorithms from thirty-nine developers by the U.S. National Institute of Standards and Technology showed that “facial recognition software got 20 times better at searching a database to find a matching photograph.” The failure rate over the same period dropped from 4.0 percent to 0.2 percent.61

One reason for the discrepancy is that under ideal conditions, facial recognition can perform very well. But when unexpected variables are thrown in—poor weather or fuzzy database images—failure rates start to shoot up. Facial recognition technology also has been unable to shake consistent gender and racial biases, which lead to elevated false positives for minorities and women—“the darker the skin, the more errors arise—up to nearly 35 percent for images of darker skinned women,” noted Steve Lohr in the New York Times.62

Citizens are starting to fight back against facial recognition systems. Protesters in Hong Kong, for example, have covered up their faces and disabled their smartphone facial recognition logins to prevent law enforcement access. They have also turned the tables on the police by taking pictures of unbadged officers and using facial recognition image searching to expose the officers’ identities online.63

Facial recognition systems are rapidly spreading around the world. The index identifies at least sixty-four countries that are actively incorporating facial recognition systems in their AI surveillance programs.
SMART POLICING

The idea behind smart policing is to feed immense quantities of data into an algorithm—geographic location, historic arrest levels, types of committed crimes, biometric data, social media feeds—in order to prevent crime, respond to criminal acts, or even make predictions about future criminal activity. As Privacy International notes: “With the proliferation of surveillance cameras, facial recognition, open source and social media intelligence, biometrics, and data emerging from smart cities, the police now have unprecedented access to massive amounts of data.” One major component of smart policing, therefore, is creating automated platforms that can disaggregate immense amounts of material, integrate data coming in from multiple sources, and permit fine-tuned collection of individual information.

One area that has received considerable recent attention is predictive policing. The technique accelerated in the United States after the National Institute of Justice started issuing grants for pilot predictive policing projects in 2009. At their core, these programs claim to predict with remarkable accuracy, based on massive data aggregation, where future crimes will be committed and which individuals are likely to commit them. Predictive policing has exploded in popularity. The PredPol predictive analytics program, for example, is deployed “by more than 60 police departments around the country.”64

But there are growing concerns about algorithmic bias and prejudice, as well as the effectiveness of these predictions. Recent reporting by Caroline Haskins for Vice describes how PredPol’s predictive crime forecasting algorithm operates. PredPol’s software generates crime forecasts for police officers “on a scale as small as 500 by 500 square feet,” which can pinpoint specific houses. It assumes that “certain crimes committed at a particular time are more likely to occur in the same place in the future.”65 PredPol reveals that “historical event datasets are used to train the algorithm for each new city (ideally 2 to 5 years of data). PredPol then updates the algorithm each day with new events as they are received from the department.” New predictions are highlighted in special red boxes superimposed on Google Maps representing high-risk areas that warrant special attention from police patrols.66 A key shortcoming in PredPol’s methodology is that it generates future predictions based on data from past criminal activity and arrests. Certain minority neighborhoods that have suffered from “overpolicing” and biased police conduct show up with higher frequency in PredPol’s dashboard. This may not represent fine-tuned algorithmic crime prediction so much as the perpetuation of structurally biased policing.
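
The frequency-driven logic described above—and the feedback loop it creates—can be seen in a deliberately naive sketch. This is not PredPol’s proprietary algorithm; it is a minimal stand-in that ranks grid cells by historical incident counts, which is precisely how overpoliced areas keep resurfacing as “high-risk.” All data below are hypothetical.

```python
from collections import Counter

# Hypothetical historical incident log: (grid_cell, crime_type) pairs.
# Cells that were policed (and recorded) more heavily dominate the data.
history = [
    ("cell_A", "burglary"), ("cell_A", "theft"), ("cell_B", "theft"),
    ("cell_A", "assault"), ("cell_C", "burglary"), ("cell_A", "theft"),
]

# Rank cells by past frequency and flag the top k as tomorrow's "high-risk" boxes.
counts = Counter(cell for cell, _ in history)
k = 2
high_risk = [cell for cell, _ in counts.most_common(k)]
print(high_risk)  # ['cell_A', 'cell_B'] -- past records drive future patrols

# The feedback loop: patrolling flagged cells produces new records there,
# which are appended to `history`, reinforcing the same predictions.
```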

China has enthusiastically embraced predictive policing as part of its Xinjiang crackdown. Human Rights Watch reports on the creation of an Integrated Joint Operations Platform (IJOP), which collects data from CCTV cameras, facial recognition devices, and “wifi sniffers” (devices that eavesdrop on activities or communications within wireless networks). IJOP procures additional data from license plates and identification cards scanned at checkpoints, as well as health, banking, and legal records.67 Chinese authorities are supplementing IJOP with mandatory DNA samples from all Xinjiang residents aged twelve to sixty-five.68 This information is fed into IJOP computers, and algorithms sift through troves of data looking for threatening patterns. Once IJOP flags an individual, that person is picked up by security forces and detained for questioning.69

Smart policing techniques are used in at least fifty-two of seventy-five countries with AI surveillance.
AI SURVEILLANCE ENABLING TECHNOLOGIES

A second category of technology is not directly responsible for supporting surveillance programs, but provides critical capabilities that are essential for implementing applications. Advanced video surveillance and facial recognition cameras could not function without cloud computing capabilities. As one expert put it, if video surveillance is the “eyes” then cloud services are the “brains” that “connect cameras and hardware to the cloud computing models via 5G networks.”70 However, cloud computing in isolation is not inherently oriented toward surveillance. Therefore, these secondary technologies are placed in an “enabling technologies” category and described below.71 They are not included in the AIGS Index.

AUTOMATED BORDER CONTROL SYSTEMS

Automated border control (ABC) systems are found primarily in international airports and at border crossings. According to the consulting firm Accenture, ABC systems use “multi-modal biometric matching”—facial image recognition combined with e-passports or other biometric documents—to process passengers.72 The process initiates when a passenger steps in front of a multi-camera wall. Digital mirrors located adjacent to the cameras attract passengers’ eyes for image capture. A risk assessment is then performed through automated testing of identities against an individual’s passport and certain security watch-lists.73 Those who are not cleared by the automated system must go into secondary screening with human agents.
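
Reduced to its decision logic, an ABC gate performs two checks: verify the live capture against the traveler’s e-passport photo, then screen against watch-lists. The sketch below is a schematic reconstruction under those assumptions, not any vendor’s implementation; the similarity function and thresholds are illustrative.

```python
def automated_border_check(live, passport, watchlist, threshold=0.85):
    """Schematic ABC gate: returns "cleared" or "secondary screening".
    Inputs are face embeddings (lists of floats); all values illustrative."""
    def similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = lambda v: sum(x * x for x in v) ** 0.5
        return dot / (norm(a) * norm(b))

    # Step 1: identity verification against the e-passport image.
    if similarity(live, passport) < threshold:
        return "secondary screening"  # automated identity check failed
    # Step 2: risk assessment against security watch-lists.
    if any(similarity(live, listed) >= threshold for listed in watchlist):
        return "secondary screening"  # possible watch-list hit
    return "cleared"

# Illustrative use with toy three-dimensional "embeddings":
print(automated_border_check([1.0, 0.1, 0.0], [0.9, 0.1, 0.1],
                             watchlist=[[0.0, 1.0, 0.0]]))
```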

Governments are piloting new features, such as automated lie detection technology, in ABC systems. For example, the European Union is testing a technology called iBorderCtrl in three countries—Greece, Hungary, and Latvia—to screen migrants at border crossings. Individuals are asked questions about their countries of origin and circumstances of departure. The answers are then evaluated by an AI-based lie-detecting system.74 Travelers found to have honestly answered questions are given a code allowing them to cross. All others are transferred to human border guards for additional questioning. The technology behind iBorderCtrl is based on “affect recognition science,” which purports to read facial expressions and infer emotional states in order to render legal judgments or policy decisions. Psychologists have widely criticized these tools, maintaining that it is difficult to rely on facial expressions alone to accurately determine a person’s state of mind.75 Despite scientific skepticism about these techniques, governments continue to explore their use.
CLOUD COMPUTING

Governments and companies are increasingly storing data in massive off-site locations—known as the cloud—that are accessible through a network, usually the internet.76 Cloud computing is a general use technology that includes everything from turn-by-turn GPS maps, social network and email communications, file storage, and streaming content access. The National Institute of Standards and Technology defines cloud computing as a “model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”77 In basic terms, cloud computing data centers function as the backbone of the internet, instantly storing, communicating, and transporting produced information. As such, cloud computing is essential to effectively running AI systems. Microsoft, IBM, Amazon, Huawei, and Alibaba have all established these data centers to facilitate AI operations.

A growing number of countries have fully embraced cloud computing and outsourced all of their data storage needs to a single corporate platform. In 2018, for example, Iceland signed a service agreement with Microsoft for the company to be the sole IT supplier for the country’s entire public sector.78 The cloud computing trend is not without problems. For one, cloud servers present enticing targets for cyber hackers. Security firms like NSO Group claim they are able to penetrate cloud servers and access a target’s “location data, archived messages or photos,” leading many to question whether cloud computing companies can keep personal information, corporate secrets, classified government material, or health records safe (although cloud servers generally represent a more secure method of storage than legacy on-site data storage facilities).79 A related concern is forced data disclosures—even if cloud servers remain technically secure, governments may coerce companies into disclosing certain data (such as email communications or text messages of regime critics) held in the cloud.
INTERNET OF THINGS

The IOT is based on the reality that more and more devices will be connected to each other via the internet, allowing data to be shared for analytic processing in the cloud.80 A major IOT hurdle is lack of interoperability between devices. At present, iPhones, Alexa speakers, Nest thermostats, and OnStar auto systems function from different platforms and use different information sources. The IOT’s goal is to “help tame this Tower of Babel” and ensure device integration and data aggregation (although companies like Amazon, Apple, and Google are also setting up distinct ecosystems that only have limited interoperability with other platforms).81 While the IOT will bring greater efficiencies, it may also transform traditional non-networked devices, such as smart speakers, into omnipresent surveillance instruments:

The Internet of Things promises a new frontier for networking objects, machines, and environments in ways that we [are] just beginning to understand. When, say, a television has a microphone and a network connection, and is reprogrammable by its vendor, it could be used to listen in to one side of a telephone conversation taking place in its room—no matter how encrypted the telephone service itself might be. These forces are on a trajectory towards a future with more opportunities for surveillance.82

Controversy surrounding IOT technology is growing. In early 2019, Amazon disclosed that thousands of its workers listened to conversations recorded by Echo smart speakers. In some cases, its workers debated whether recordings of possible crimes should be turned over to law enforcement authorities.83 Amazon analyzed these transcripts without the knowledge or consent of its customers. Similarly, it came to light that Google and Facebook contractors have been regularly listening to recordings between their platforms and individual consumers.84

IOT-powered mobile surveillance is another possibility for this class of technology. A new device was recently demonstrated that plugs into a Tesla Model S or Model 3 car and turns its built-in cameras “into a system that spots, tracks, and stores license plates and faces over time,” as journalist Andy Greenberg describes. When the owner has parked the car, “it can track nearby faces to see which ones repeatedly appear.” The purpose of the device is to warn car owners against thieves and vandals. But as the device’s inventor Truman Kain acknowledges, “it turns your Tesla into an AI-powered surveillance station” and provides “another set of eyes, to help out and tell you it’s seen a license plate following you over multiple days, or even multiple turns of a single trip.”85
CONCLUSION

The spread of AI surveillance continues unabated. Its use by repressive regimes to engineer crackdowns against targeted populations has already sounded alarm bells. But even in countries with strong rule of law traditions, AI gives rise to troublesome ethical questions. Experts express concerns about facial recognition error rates and heightened false positives for minority populations. The public is increasingly aware of algorithmic bias in AI training datasets and their prejudicial impact on predictive policing algorithms and other analytic tools used by law enforcement. Even benign IOT applications—smart speakers, remote keyless entry locks, automotive intelligent dash displays—may open troubling pathways for surveillance. Pilot technologies that states are testing on their borders—such as iBorderCtrl’s affective recognition system—are expanding despite criticisms that they are based on faulty science and unsubstantiated research. The cumulative impact gives pause. Disquieting questions are surfacing regarding the accuracy, fairness, methodological consistency, and prejudicial impact of advanced surveillance technologies. Governments have an obligation to provide better answers and fuller transparency about how they will use these new intrusive tools.

The purpose of the index and working paper is to highlight emergent trends for a technology that is not well understood yet will increasingly shape modern life. The good news is that there is ample time to initiate a much-needed public debate about the proper balance between AI technology, government surveillance, and the privacy rights of citizens. But as these technologies become more embedded in governance and politics, the window for change will narrow.
ACKNOWLEDGMENTS

Special thanks to Luke Lamey, undergraduate at Georgetown University, for research assistance in compiling references for the AI Global Surveillance Index. Many thanks also go to Jon Bateman (Carnegie Endowment for International Peace), Thomas Carothers (Carnegie Endowment for International Peace), Adrian Shahbaz (Freedom House), Brian Wampler (Boise State University), and Nick Wright (Intelligent Biology, Georgetown University) for generously giving their time to read through prior drafts of this paper and to offer invaluable feedback and advice.
