4 May 2019

Online Information Operations Cross Platforms. Tech Companies' Responses Should Too.

By Jessica Brandt, Bradley Hanlon 

In March, Facebook took down more than 2,600 pages, groups, and accounts engaged in sweeping coordinated information operations on its platforms—an important step in the company’s effort to prevent malign actors from spreading content designed to polarize and mislead. However, when our team at the Alliance for Securing Democracy took a close look at the small number of blocked pages and groups about which Facebook and its partner, the Digital Forensic Research Lab (DFRLab), shared information, we found that related accounts on Twitter, YouTube, and Instagram continue to operate, spreading falsehoods. Despite promises of better communication and reform, there is a concerning lack of coordination between Facebook and its peers.

Online information operations have emerged as a key tactic in the toolkit malign foreign actors use to interfere in democracies. As Special Counsel Mueller’s report highlighted, these actors have used social media platforms to spread divisive content as part of a broader campaign to undermine democratic institutions. Our organization investigates these efforts and develops comprehensive solutions for government, the private sector, and civil society to counter and deter foreign interference. After Facebook announced its takedown of inauthentic networks a few weeks ago, our research team took a close look at each of the accounts and pages publicly identified by Facebook and its partners.

In several cases, websites linked to the removed networks continue to maintain a social media presence on a variety of platforms. For example, one of the removed Facebook pages originating in Iran, “Human Rights Agency – ADH,” links to a corresponding website that describes the organization as a U.N.-affiliated human rights watchdog. (The website is still live, but is archived here.) While the Facebook link on that website redirects to a screen explaining that the page no longer exists, the Twitter link leads to a live Twitter account (archived version). Using a similar process, we found several other websites tied to the removed networks that continue to operate across other major social media platforms. For example, another website, AFtruth—which publishes news stories that parrot Iranian state propaganda—links directly to two of the removed Facebook pages and, according to DFRLab’s analysis, bears connections to a previous Iranian information operation. Yet the website’s YouTube page is still active weeks after Facebook’s announcement (archived version).

Even more concerning is “Pars Today Hebrew,” a Facebook page the company highlighted in its announcement. Pars Today’s website—archived here—publishes pro-Iranian news in a range of languages. While Pars Today Hebrew’s Facebook and Twitter accounts have been suspended, its Instagram account continues to operate (archived version)—which is particularly strange, because Instagram is a subsidiary of Facebook. Pars Today also continues to operate a network of other accounts in various languages across Twitter, YouTube, Instagram, and even Facebook (archived versions: Twitter, YouTube, Instagram, Facebook).

These examples are drawn from the mere handful, out of thousands of removed pages, groups, and accounts, about which we have any information at all. Without greater transparency, it is impossible to know how many other related accounts may continue to operate across the online information space.

It is unclear why Facebook—and other online platforms—removed some of the Iranian-linked accounts for engaging in coordinated inauthentic behavior while leaving other affiliated parts of the network, like the other Pars Today pages and Pars Today Hebrew’s Instagram page, untouched. The seemingly incomplete takedowns may reflect a lack of coordination both within Facebook itself and between Facebook and other key platforms.

Poor coordination is not a new problem for the major online information platforms. In fact, this same problem considerably hindered efforts to combat disinformation in the lead-up to and aftermath of the 2016 election. 

The “Alice Donovan” episode offers a case in point. In early 2016, officers in Russia’s military intelligence agency (known as the GRU) created the fake “Alice Donovan” Facebook account as part of the Kremlin’s disinformation network. “Alice” claimed to be a freelance journalist interested in U.S. foreign policy, and several heavily plagiarized articles were published in her name on various alternative news sites. In June of that year, GRU officers used the account to establish the “DCLeaks” Facebook page, through which they disseminated documents and emails stolen from the Democratic National Committee and the Clinton campaign. The FBI had long been suspicious of the account: It began monitoring Donovan at least as early as the spring of 2016, months before Russia used the account to release the stolen materials. Yet her Facebook account remained active for more than a year after the leak, until the New York Times approached the company with suspicions that the account was fake. Donovan’s Twitter account remained active for several more months, until July 2018, two years after the leak, when the Department of Justice finally exposed Donovan as a GRU persona.

Better cross-platform coordination—and better communication between government and the private sector—is necessary to combat disinformation. Authoritarian-backed disinformation campaigns operate across the whole of the information ecosystem, involving coordinated activity on numerous websites and social media platforms. Recent responses have mostly been isolated to specific platforms. More effective collaboration among platform companies, as well as between those companies and the government, would speed the identification and dismantlement of malign networks and the countering of emerging threats. New technologies, including “deep fake” video, will make the next wave of disinformation increasingly difficult to thwart. Government and social media companies bring different assets to bear on that task. The intelligence community has capabilities that allow it to assess the intentions of various threat actors; social media companies have unique visibility into activity on their platforms.

At a minimum, social media companies should adopt internal coordination policies to ensure that when an account is taken down on one of their sites (e.g., Facebook), related accounts are immediately removed from the others (e.g., Instagram). As platform companies diversify their portfolios, this should be standard practice. Recognized channels of communication between platform companies would also be useful. An industry-wide consortium of officers within platform companies’ trust and safety teams could be designated the first point of contact for takedown notices, responsible for verifying inauthentic behavior and, where necessary, taking swift action.
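To make the idea concrete, here is a minimal sketch in Python of what a shared takedown notice and its circulation to peer platforms might look like. Every name in it (TakedownNotice, PlatformContact, broadcast, the example indicators) is hypothetical and purely illustrative; it does not describe any company’s actual systems.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Hypothetical record a platform might circulate after removing a network.
    @dataclass
    class TakedownNotice:
        originating_platform: str            # platform that performed the removal
        network_label: str                   # internal label for the removed network
        removed_at: datetime                 # when the removal happened
        indicators: list = field(default_factory=list)  # linked domains, handles, etc.

    # Hypothetical designated contact on a platform's trust and safety team.
    class PlatformContact:
        def __init__(self, platform):
            self.platform = platform
            self.pending_review = []         # notices awaiting verification

        def receive(self, notice):
            # Queue a peer's notice for independent verification before acting.
            if notice.originating_platform != self.platform:
                self.pending_review.append(notice)

    def broadcast(notice, consortium):
        # Send one platform's takedown notice to every other consortium member.
        for contact in consortium:
            contact.receive(notice)

    # Example: a removal on one platform is flagged to the others for review.
    consortium = [PlatformContact(p) for p in ("facebook", "twitter", "youtube", "instagram")]
    notice = TakedownNotice(
        originating_platform="facebook",
        network_label="example-inauthentic-network",
        removed_at=datetime.now(timezone.utc),
        indicators=["example-domain.invalid", "@example_handle"],
    )
    broadcast(notice, consortium)

The point of the sketch is the separation of steps: the originating platform shares its indicators, each peer verifies independently, and only then removes related accounts, so that one company’s mistake does not propagate automatically across the industry.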

Companies should also work to establish coordinated standards for identifying and addressing disinformation and inauthentic behavior across their platforms. Differing terms of service may be partly to blame for the continued operation on other platforms of accounts linked to those Facebook took down. For example, while Facebook’s terms of service prohibit “coordinated inauthentic behavior” (networks of accounts working to mislead users), Twitter’s terms of service focus on the characteristics of specific accounts (for example, whether an account uses a stolen profile photo or bio). Those differences may hinder efforts to tackle information operations holistically. While companies are unlikely to agree on terms of service as a whole, they should work toward a unified understanding of the threat and a basic agreement on when to take action to counter it.

Since platforms often do not act until they are told to, the government may need to provide accountability. Government could also develop formalized channels for sharing threat intelligence between its entities that track disinformation trends broadly and the companies that monitor traffic on their own platforms. This should be done with sensitivity to the legitimate concerns of activists who fear increased government surveillance or abuse of power, not least in authoritarian countries. Companies should be careful not to share personally identifying information that repressive governments could use to monitor or silence critics.
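As a purely illustrative sketch of that last point, and assuming a hypothetical indicator record (the field names below are invented for the example), a platform could strip anything that identifies an individual before passing network-level indicators to an outside partner:

    # Hypothetical indicator record; field names are illustrative only.
    raw_indicator = {
        "network_label": "example-inauthentic-network",
        "linked_domain": "example-domain.invalid",
        "account_handle": "@example_handle",
        "registered_email": "person@example.invalid",  # could identify a person
        "registration_ip": "192.0.2.1",                # could identify a person
    }

    # Only network-level fields are shared; anything tied to an individual is dropped.
    SHAREABLE_FIELDS = {"network_label", "linked_domain", "account_handle"}

    def redact_for_sharing(indicator):
        return {key: value for key, value in indicator.items() if key in SHAREABLE_FIELDS}

    shared = redact_for_sharing(raw_indicator)  # safe to pass along; identifying fields are gone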

Transparency is just as necessary as coordination. Facebook’s announcement describes networks of accounts originating in Iran, Russia, Macedonia, and Kosovo. While it includes sample content from each of these networks, it does not provide information on the vast majority of pages, groups, and accounts that it removed. Investigations by the Facebook-partnered DFRLab provide helpful analysis of the Iran- and Russia-linked networks, but reveal only a small portion of the relevant pages and groups. Were tech companies more proactive in sharing information about how their platforms have been manipulated by foreign actors, researchers could turn their attention to improving defenses against disinformation.

The challenge is urgent. The 2020 campaign season is underway, and disinformation about declared candidates is circulating. Information operations are vast and cross platforms. Our responses must, too.
