8 December 2017

Countering Russian Information Operations in the Age of Social Media


Russia's information warfare operations, aimed at weakening adversaries' social cohesion and political systems, are complex and adaptive, but Western governments can take steps to guard against them.

A Russian flag and a 3-D model of the Facebook logo are seen through a cutout of the Twitter logo in this photo illustration taken in Zenica, Bosnia and Herzegovina, on May 22, 2015. Dado Ruvic/Reuters

As investigations into attempts to influence the 2016 U.S. presidential election continue, more aspects of Russia’s approach to information warfare are coming to light. A steady stream of new disclosures is revealing a complex blend of hacking, public disclosures of private emails, and use of bots, trolls, and targeted advertising on social media designed to interfere in political processes and heighten societal tensions.

Moscow’s hostile actions are driven by the belief that Russia is already in a state of conflict with the West, led by the United States, and that the internet is a domain for waging this conflict. From the earliest stages of the internet’s development, Russia has held a starkly different view from the West of its benefits and its potential. Russia’s national security establishment immediately saw connectivity as a threat and a potential weapon—and eventually as one that could help achieve regime change and deprive a country of its sovereignty—rather than as an enabler of economic development.

The organization of Russia’s information-warfare capabilities, which include cyber operators, media outlets, and false flag entities, is shrouded in secrecy. In the West, generally only the intelligence community has a clear picture of how Russian capabilities are directed. Barring the sudden appearance of a Russian counterpart to Edward Snowden, the only view into Russia’s information toolbox is provided by cybersecurity companies and criminal prosecutions. The picture is further muddied because the Russian government keeps many of its cyberwarfare actors at arm’s length by employing contractors and former criminals through middlemen, giving Moscow a degree of deniability if caught.

Nevertheless, both Western governments and private industry can take steps to mitigate Russian influence operations. Western governments should swiftly and decisively denounce Russian information activities as soon as they are identified, and their counterintelligence agencies should identify quantitative means to measure the effectiveness of Russia’s methods. Social media companies should more aggressively police their platforms for malicious state-sponsored content, and they should work with news organizations to promote verified and fact-checked content on their platforms.

Background

Russia’s long-standing foreign policy objective is to weaken its adversaries by any means available, particularly countries on its periphery, NATO members, and the United States; its information warfare targets social cohesion and political systems toward this end. During the twentieth century, the Soviet Union exploited freedom of expression in the West by planting and spreading fake news stories. In the last decade, the rise of social media has made this task vastly simpler. And at least since 2016, Moscow has also exploited the sophisticated advertising networks used by legitimate companies and political campaigns to precisely target audiences for disinformation.

Russia worked toward this objective during the 2016 U.S. election campaign, when Russian agents combined technical and psychological measures to sway U.S. voters away from Hillary Clinton and toward Donald J. Trump. Hackers obtained documents and selectively released them to embarrass the Clinton campaign, while carefully targeted social media operations denigrated Clinton and promoted Trump’s agenda.

Russia attempted similar campaigns during the French presidential election in May 2017, but a forewarned French government and media meant that the activities met with only limited technical success and had no significant bearing on the result. French law prohibits candidates from campaigning, and the media from quoting candidates or campaign officials, within forty-four hours of a presidential vote. That prevented the French media from disclosing the contents of emails leaked from Emmanuel Macron’s campaign in the hours before the vote. French news consumers also tend to rely on traditional outlets rather than social media, which further limited the leak’s effect.

It is harder to discern whether or how Russia meddled in the German elections in September 2017. One possible explanation is that after the French experience, Russia chose not to interfere in Germany; another is that Russia did attempt to interfere but used subtler techniques that are not yet fully understood. (Many of the tools used in the U.S. election only became widely known a year after the event.) But even if Russian election manipulation is unsuccessful or entirely absent, the mere suggestion of it is enough to sow uncertainty and doubt about the democratic process, and hence to meet Russia’s objectives.

Russian attempts to sow discord are not confined to elections. Attempts to meddle in U.S. internal affairs have continued since the election. Most recently, Russian internet trolls published divisive messages on social media in response to the controversy over NFL players’ kneeling during the national anthem. The Alliance for Securing Democracy, a research group that tracks more than six hundred Russian-backed accounts, observed Russian trolls promoting hashtags aimed at fueling the debate.
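
To make that tracking concrete, the sketch below shows the kind of tally such a research group might run. It is a minimal illustration in Python; the account names, the observation format, and the data are all invented:

```python
from collections import Counter

# Hypothetical observations: (account, hashtag) pairs collected from
# posts by accounts already assessed as Russian-backed.
TRACKED_ACCOUNTS = {"@troll_one", "@troll_two", "@troll_three"}

observations = [
    ("@troll_one", "#TakeAKnee"),
    ("@troll_two", "#BoycottNFL"),
    ("@troll_one", "#BoycottNFL"),
    ("@troll_three", "#TakeAKnee"),
    ("@unrelated_user", "#MondayNightFootball"),
]

def promoted_hashtags(observations, tracked, top_n=5):
    """Count hashtag use by tracked accounts only; heavy, coordinated
    promotion of a tag is a crude signal of amplification."""
    counts = Counter(tag for account, tag in observations if account in tracked)
    return counts.most_common(top_n)

print(promoted_hashtags(observations, TRACKED_ACCOUNTS))
# [('#TakeAKnee', 2), ('#BoycottNFL', 2)]
```

Notably, such a tally often shows the same set of accounts pushing hashtags on both sides of a controversy, which is consistent with fueling the debate rather than winning it.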

The digital processing of personal data, including browsing history and consumer spending, enables anyone to precisely target selected groups and individuals by geographic location and socioeconomic status. In particular, malicious actors are able to show contradictory messages to different groups of users, categorized by political, ethnic, religious, or demographic characteristics, in order to play on existing tensions within target societies. Information is slowly emerging about the extent to which this method was employed by Russian-linked entities during and after the U.S. presidential election, but its overall effect remains unclear.
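
One way to picture how this targeting could be detected is sketched below: a check that flags any advertiser serving opposing stances on the same topic to different audience segments. The record format, field names, and data are assumptions for illustration, not any platform’s actual schema:

```python
# Hypothetical ad-buy records, as a platform's review team might see them.
ad_buys = [
    {"buyer": "acct_17", "topic": "immigration", "stance": "pro",
     "segment": "urban, 18-29"},
    {"buyer": "acct_17", "topic": "immigration", "stance": "anti",
     "segment": "rural, 45-65"},
    {"buyer": "acct_88", "topic": "trade", "stance": "pro",
     "segment": "suburban, 30-44"},
]

def contradictory_buyers(ads):
    """Return buyers that pushed both sides of the same topic, broken
    out by the audience segments each stance was shown to."""
    seen = {}  # (buyer, topic) -> {stance: set of segments}
    for ad in ads:
        key = (ad["buyer"], ad["topic"])
        seen.setdefault(key, {}).setdefault(ad["stance"], set()).add(ad["segment"])
    return {key: stances for key, stances in seen.items() if len(stances) > 1}

print(contradictory_buyers(ad_buys))
# {('acct_17', 'immigration'): {'pro': {'urban, 18-29'},
#                               'anti': {'rural, 45-65'}}}
```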

Cyber-enabled disinformation can have a measurable objective and effect. One method is for hackers to insert false reports into genuine media outlets. For example, in May 2017, a malicious actor suspected to be Russian compromised the website of Qatar’s state news agency and planted fabricated remarks, attributed to the emir of Qatar, praising Iran. The false report triggered a diplomatic row between Qatar and its neighbors.

Challenges

The social media ecosystem provides an ideal environment for hostile information campaigns. The more incendiary the information is, the more likely it is to go viral. Many users have lost trust in established news outlets, and they tend to consume information that affirms, rather than informs, their views.

Russia has no need to create new divisions in target societies when it can exploit already-existing fault lines. For example, Russian-backed efforts amplified the controversy about the NFL and the national anthem by promoting the most divisive and extreme voices in the debate. Now, as during the Cold War, the strongest defense against malign Russian influence is to identify the divisions and social ills that provide Russia with leverage. Remedies to these problems are complex and require significant resources and time. However, the Trump administration has shown little interest in confronting Russian cyber operations, and the president himself is actively engaged in the divisive use of social media.

Western states also depend on multinational corporations to constrain information warfare operations. Immediately after the election, Google, Facebook, Twitter, and other tech companies denied that their services could have been manipulated by disinformation campaigns. More recently, though, they have acknowledged the scope of Russian operations, worked with third parties to flag fake news, rolled out technological fixes to counter disinformation, and provided limited data on the sources of advertisement purchases. Critics in the media and Congress have argued that these companies’ responses have been “frankly inadequate on almost every level” and are unlikely to succeed.

Recommendations

Although the Trump administration seems unlikely to pursue action against Russian information operations, there are steps the U.S. Congress and other governments should consider.

Awareness of the challenge of Russian information warfare is the most potent defense against it. Western nations were initially slow to respond to the multifaceted nature of Russia’s developing online capabilities. The focus in the West was almost exclusively on countering technical threats delivered through cyberspace, such as economic crime, espionage, and attacks on critical infrastructure. This approach neglected the additional capabilities that Russia was building up in other areas of information warfare.

More recently in Europe, however, increasing awareness of the threat has enabled society, media, and governments to put appropriate defenses in place. In Germany, public awareness of hostile information operations had been raised by the “Lisa” case, in which Russian media amplified a false story about the abduction of a Russian-German girl in order to stoke anti-immigrant sentiment. In France, the media blackout helped blunt the effect of Russia’s interference in the presidential election, but Macron’s campaign was also aware of Russia’s attempts to influence the outcome and took countermeasures. Leaders in other Western nations should be equally open and outspoken about the nature of the challenge; doing so has proven highly effective in raising public awareness and reducing potential targets’ susceptibility to information operations.

Another essential step in countering information warfare is for governments targeted by Russian influence operations to develop a metric of damage that acknowledges a range of objectives, including influencing elections. Countries including the United States, Germany, and the United Kingdom have made little visible effort to quantify the success and effectiveness of Russia’s subversion and disinformation campaigns. Without such measurement, targeted governments risk directing resources and countermeasures against low-impact threats that could reasonably just be monitored, while overlooking threats that cause actual harm.
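
One possible starting point is a composite exposure score of the kind sketched below. The weights, fields, and figures are illustrative assumptions, not an established methodology; the point is only that scoring campaigns lets analysts separate those worth countering from those worth merely watching:

```python
def campaign_score(impressions, engagements, amplifying_outlets):
    """Toy damage metric: weight raw reach lightly, active engagement
    more heavily, and pickup by genuine media outlets most of all."""
    return (0.1 * impressions / 1_000
            + 1.0 * engagements / 1_000
            + 50.0 * amplifying_outlets)

# Invented example campaigns.
campaigns = {
    "divisive_hashtag_push": campaign_score(900_000, 40_000, 6),
    "fringe_meme_page": campaign_score(15_000, 300, 0),
}
for name, score in sorted(campaigns.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f}")
# divisive_hashtag_push: 430.0
# fringe_meme_page: 1.8
```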

Once harmful information operations are identified, targeted states should quickly denounce them, both to minimize their effectiveness and to deter other actors that might want to conduct Russian-style operations. Policymakers should also warn other states tempted to combine cyberattacks with social media manipulation that exposure and response will be much more rapid and effective than they were in the 2016 election.

On computers, antivirus software monitors the integrity of critical systems and processes, assessing whether they have been affected by malicious data introduced from outside. Governments should develop an analogous system of identifying sources of misinformation and mapping how they influence online discourse and public opinion. This would allow them to properly assess any effect of Russian subversion on public debate. While the government agency that would conduct this monitoring would vary among countries, in each case the security and counterintelligence agencies responsible for protecting the security and integrity of state systems would need to provide support. The costs involved in implementing such measures would be a disincentive for any Western government, but they should be weighed against the costs of the loss of political legitimacy, integrity, or, indeed, sovereignty.
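
A minimal sketch of the influence-mapping half of such a system follows, assuming a hypothetical feed of repost records rather than any real platform data. Flagged source accounts play the role of known malware signatures; the traversal estimates how far their content has spread:

```python
from collections import defaultdict, deque

# Hypothetical repost records: (original_poster, reposter).
reposts = [
    ("kremlin_outlet", "troll_a"),
    ("troll_a", "real_user_1"),
    ("troll_a", "real_user_2"),
    ("real_user_1", "real_user_3"),
    ("local_paper", "real_user_4"),
]

def downstream_reach(edges, seeds):
    """Breadth-first traversal from flagged seed accounts, returning
    every account their content reached, directly or indirectly."""
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)
    reached, queue = set(), deque(seeds)
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in reached:
                reached.add(nxt)
                queue.append(nxt)
    return reached

print(downstream_reach(reposts, {"kremlin_outlet"}))
# {'troll_a', 'real_user_1', 'real_user_2', 'real_user_3'} (order may vary)
```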

Social media platforms such as Twitter and Facebook have an important role to play in mitigating the effects of Russian messaging, but their primary objective is generating profits, not defending Western political systems. Attempts to introduce legislation or regulations to restrict online speech, even if they were targeted at Russian disinformation and trolls, could mirror Russian constraints on free expression and could be interpreted as running counter to the values Western societies seek to defend. Nevertheless, tech platforms have an interest in taking firm steps to prevent, for example, the hijacking of profiles of legitimate organizations and individuals for the purpose of disinformation. They also have an interest in cooperating with Western intelligence agencies, as this could provide them with greater understanding of how their systems are abused to systematically deceive their users, as well as of software bugs and other technical vulnerabilities in their products.

To address the specific problem of disinformation, social media companies should continue partnering with journalists and fact-checkers to build trust, even though this is only effective for media-literate users who take the time and effort to assess the legitimacy of sources. The extent to which governments can guide such efforts will vary among countries, depending on their constitutional systems and media cultures. In the United States, for instance, the First Amendment greatly limits what the U.S. government can do to vet online media. But where government action is permissible, national media bodies, such as the United Kingdom’s Independent Press Standards Organisation and the Office of Communications, should implement proposals for an open review and verification system for online media, with the aim of establishing a gold standard for fact-checking and objectivity. Whichever approach countries choose to take, they should recognize that any anti-disinformation system needs protection against the same kind of gaming and abuse as any other open forum to which Russia has access.
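
As a rough sketch of how such a verification system might be consumed, the snippet below models a public registry that platforms could check before promoting a story. The registry, its entries, and the status labels are hypothetical:

```python
# Invented registry entries: domain -> outcome of an open review process.
REGISTRY = {
    "example-times.com": "verified",      # passed open review
    "dailyfabrications.net": "failed",    # failed a fact-checking audit
}

def label_for(url):
    """Return the registry status for a story's domain. Unknown outlets
    default to 'unreviewed' -- absence of review is not proof of falsity."""
    domain = url.split("//")[-1].split("/")[0]
    return REGISTRY.get(domain, "unreviewed")

print(label_for("https://example-times.com/politics/story"))   # verified
print(label_for("https://dailyfabrications.net/shock-claim"))  # failed
print(label_for("https://new-blog.org/post"))                  # unreviewed
```

As the paragraph warns, the registry itself would be a target: an actor able to poison its entries would turn the gold standard into another vector for abuse.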

To combat the particular challenge of how human psychology is exploited by social media disinformation, governments’ responses should be as interesting as the fake news they are countering. Simple explanations that a particular piece of news is false are not sufficient to engage target audiences. Countermeasures should focus not on fact-checking but on the deceit—emphasizing that people were conned—and, like the original disinformation, should appeal to readers’ emotions rather than their rationality, in order to be effective.

Russian information operations pose a difficult but not insurmountable challenge to targeted governments. But countermeasures should be flexible and adaptable: any success in countering Moscow’s operations will invariably cause the Kremlin to deploy new capabilities. If defenders are not prepared to be alert and agile, they will once more be taken by surprise.
