8 May 2019

U.S. Tech Needs Hard Lines on China

BY JUSTIN SHERMAN

In April, reports emerged that academics at Microsoft Research Asia in Beijing had collaborated on artificial intelligence research with individuals affiliated with China’s National University of Defense Technology, an institution under the direction of China’s top military body, the Central Military Commission. Revelations about this AI collaboration came on the heels of other headlines: In February, Microsoft employees protested a company contract with the U.S. Defense Department, and the following month, the chairman of the Joint Chiefs of Staff remarked that Google’s AI work in China indirectly benefits the Chinese military.

Companies incorporated in democratic countries will soon find themselves forced to draw lines on whether AI collaborations with entities in China and elsewhere are acceptable, especially those firms that balk at working with the U.S. military.


Artificial intelligence research is largely open, meaning code, training data, and other related findings are frequently published online for anyone to access. This research also depends heavily on connections and interdependencies between companies and academic institutions in the United States and China, and many of these collaborations are beneficial for all parties involved. JD.com, China’s largest retailer, has a partnership with the Stanford AI Lab, for instance, to fund work in natural language processing, robotics, and a number of other research areas. Immense quantities of information flow across borders. But this openness also makes it extremely difficult, if not altogether impossible, to control the diffusion of sensitive AI capabilities once they’re openly released.

Many artificial intelligence applications are also dual-use, with potential value in both military and civilian spaces. Facial recognition tools, for instance, may be used to identify customers in a store on the one hand and to target drone strikes on the other. The dual-use nature of many AI applications makes it hard to differentiate a harmful application from one that is purely commercial or benign. This is why broad, sweeping export controls on artificial intelligence are a terrible idea. But working with someone on artificial intelligence research is different, because of the deeper insight it provides into a technology’s design and implementation, and it’s something U.S.-incorporated companies can and should control.
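To make the dual-use point concrete, here is a minimal, hypothetical sketch in Python. The embed_face stub, the best_match helper, and every name in it are illustrative assumptions, not any company’s actual system. The matching logic is identical whether the enrolled gallery holds a store’s loyalty members or a military watchlist; the dual use lives entirely in the data and the deployment, not in the code.

```python
# Hypothetical illustration of dual use: one matching routine, two deployments.
import numpy as np

def embed_face(image) -> np.ndarray:
    # Stand-in for any openly published face-embedding network;
    # real systems map a face image to a unit-length feature vector.
    vec = np.asarray(image, dtype=np.float32).ravel()[:128]
    return vec / (np.linalg.norm(vec) + 1e-9)

def best_match(query: np.ndarray, gallery: dict, threshold: float = 0.8):
    # Cosine similarity against enrolled identities. Nothing below
    # distinguishes "greet a repeat customer" from "flag a target".
    scores = {name: float(query @ ref) for name, ref in gallery.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return (name, score) if score >= threshold else (None, score)

# Retail deployment: gallery of enrolled customers.
# Military deployment: the same call, with a watchlist as the gallery.
```

The point of the sketch is that nothing in the matching code itself marks an application as benign or harmful, which is exactly why controls aimed at the technology are so much blunter than controls aimed at collaborations.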

Collaboration on AI research gives far better insight into a developing technology and its implementation than just reading a paper or downloading code online. Put another way, working directly on AI research creates an implementation advantage.

Techniques that underpin modern deep learning, for instance, have existed for decades, yet it was the implementation, in this case figuring out how to effectively train AI systems on the right hardware, that constrained AI’s progress. Implementation is similarly important to the U.S. Department of Defense: Google’s Project Maven contract with the Pentagon was to help implement AI tools that had already been published openly. One could thus argue the collaboration didn’t add much value, and that its termination last year didn’t change anything. Yet Defense Department officials insist that, in this and other cases, losing direct collaboration with those who developed, and therefore better understand, a technology hurts their AI capabilities.

If a company works with a given entity on AI research, that entity gains a completely different understanding than if it merely downloads a paper or code online. In the case of Microsoft Research Asia, to use a recent example, it doesn’t matter that the research would have been published online anyway. The process of working with Microsoft-affiliated researchers gives the National University of Defense Technology researchers an enhanced understanding of the AI applications, in ways that benefit them when it comes time to use, or further adapt and improve, the technologies.

Line-drawing in this area is not new. U.S. companies sometimes sell dual-use internet surveillance technology to despots, despite efforts to restrict the practice. Many companies are already placing restrictions on whom they collaborate with, and on what, with respect to AI research. As previously mentioned, several U.S. companies did so following some employees’ protests over contracts with the U.S. defense apparatus. These decisions on AI are being made in addition to any frameworks that could emerge through something like a Global Network Initiative for AI, which could establish voluntary corporate guidelines on AI collaborations.

In the current political environment, shaped by the U.S.-China trade war and the U.S. government’s security concerns around Huawei’s 5G communications technology, these early attempts to restrict collaboration risk shading into China-bashing. A number of related, and bad, ideas have already arisen in this environment, such as the aforementioned AI export control proposals and discussion of restricting the travel of Chinese scientists to the United States.

That does not mean, however, that companies incorporated in democracies shouldn’t be concerned about collaborations with certain Chinese entities. Look no further than Xinjiang, where the Chinese government has built out a massive surveillance state apparatus, including AI applications, to scale up its perpetration of human rights abuses against Uighur Muslims. The population is constantly monitored and evaluated through automated systems that help decide who is sent to the so-called reeducation camps. Chinese companies have also exported dual-use AI surveillance technologies to authoritarian or authoritarian-leaning governments, including the United Arab Emirates, Zimbabwe, and Malaysia.

Artificial intelligence will contribute enormously to the economic and military power of whoever develops and successfully implements the most powerful AI applications. Given China’s expansive vision for its power in the world, this is not just a concern for U.S. foreign policy and national security. There are also serious human rights concerns: Beijing currently uses AI to enforce digital authoritarianism domestically and may work to encourage its spread abroad.

Companies incorporated in democratic countries need to take a stand somewhere. While it might not always be possible to know what research to publish or withhold, companies and researchers can and should draw lines when it comes to direct collaborations with foreign military entities, or with organizations known to have ties to the Chinese defense industry or domestic security system. Even if Microsoft Research Asia’s leadership didn’t know about, approve, or sponsor its researchers’ collaboration with a Chinese defense-affiliated group, mechanisms should be put in place to increase visibility into, and scrutiny of, these kinds of partnerships.

With China in particular, the challenge will be identifying these military and defense links: by one estimation, security-related AI firms account for the highest share of the top 100 AI companies in China. These include facial recognition startups and companies that sell security and safety-monitoring platforms, several of which have reportedly assisted the government’s mass surveillance in Xinjiang. Chinese defense researchers may also obscure their true affiliations when seeking out partnerships, further complicating the task of identifying military and defense ties.

Many collaborations between American and Chinese entities are important to AI development and benefit all parties involved, as well as those who use the research once it is published. But as repression in Xinjiang escalates, and as Beijing further boosts AI-enabled digital authoritarianism, the U.S. and broader international business community needs to do more than remain passive or neutral. It must make clear to the Chinese government that the international community will not stand by on issues of human rights and privacy.
