8 August 2019

Is the Threat of ‘Fake Science’ Real?

By Alden Fletcher 

In the early 1980s, Soviet intelligence began Operation Infektion—a campaign to erode trust in the U.S. government by orchestrating a series of scientific papers and news articles arguing that the U.S. government had created the virus that causes AIDS. As part of the operation, Soviet intelligence services relied on retired biophysicists Lilli and Jakob Segal, who, together with a university colleague, co-authored a 47-page pamphlet attributing the origins of the disease to the U.S. government. The Segals’ report recounted numerous factually accurate aspects of the disease but veered away from reality by attributing the origins of HIV/AIDS to U.S. military experiments on prisoners at Fort Detrick, Maryland. Just two years after publication, the report had received coverage from news organizations in more than 80 countries and contributed to the persistent belief that the U.S. government manufactured HIV/AIDS.


Although the operation deceived large swaths of the public, it failed to convince the scientific community, even within the Soviet bloc. Had the Segals succeeded in passing off their research as legitimate science, the operation could have caused even more damage. It could have diverted further resources away from the public health response to HIV/AIDS and entangled more scientists in an acrimonious fight about the fictitious role of the U.S. government in the epidemic.

Operation Infektion has received renewed attention today as part of discussions around “fake news” and Russian election interference. But the operation is also worth considering as an example of a specific subset of fake news within academia: what technologist and author Bruce Schneier has described as “fake research.” According to Schneier, the increasing ease and speed with which one can publish findings online has heightened the danger of research platforms hosting and disseminating fake research. Schneier notes that the publication of fake research could form part of a nation-state strategy to gain an advantage over a competitor. Like other disinformation operations, the promulgation of fake research could seriously disrupt the scientific initiatives that constitute the building blocks of a sound security architecture.

The suggestion is worth considering further. How could a country use disinformation to affect scientific research? At present, evidence does not suggest the existence of nation-state efforts to inject “fake science” into academic publishing. Although fake science could be used offensively like fake news, such an operation faces inherent constraints that limit its scope and effect. That being said, the scientific community still bears significant vulnerabilities, making it a potentially attractive target for disinformation.

The U.S. national security architecture explicitly acknowledges the importance of fundamental research for national security. As defined in a long-standing directive, fundamental research involves both basic and applied research “the results of which ordinarily are published and shared broadly within the scientific community,” and is often performed at colleges and universities. Recent national security strategies continue to recognize this link. According to this thinking, advances in scientific understanding create the foundations for new critical technologies, lead to defense-related applications and provide economic benefits. Countries with healthy and productive research initiatives gain a strategic edge in global competition. 

Recognizing the security dimension of basic research involves balancing a system that promotes “the free exchange of ideas” against the threat of a foreign actor exploiting the openness in the academic community. In August 2018, the National Institutes of Health sent a letter to many U.S. universities warning of foreign influence and espionage in biomedical, energy and defense-related research efforts. The U.S. House of Representatives is currently considering new measures to guard against such foreign meddling in U.S. academia. Despite these concerns, policymakers have yet to systematically consider the possibility of scientific disinformation operations that exploit academic openness—not by stealing information or conducting espionage but by supplying fake or misleading information.

Fake News and Fake Science

The term “disinformation” refers to false information that is intentionally created to harm some entity. Scholars have defined “fake news” as information that mimics news output in form but is produced without any of the processes, structures or ethics that seek to guarantee accurate and independent journalism. Fake news thus overlaps with disinformation: it is either deliberately false or deliberately misleading. Similarly, scientific disinformation can be understood as information output that mimics science in form but whose research, findings or conclusions have been deliberately falsified in some manner. By this definition, the report produced as part of Operation Infektion occupies a sort of middle ground. Jakob Segal in particular believed in his “research,” but the report was the product of deliberate activity by East German intelligence, working with Soviet intelligence, to foment distrust in the United States.

Given the definitional similarities, one might imagine that a state-directed fake science operation could look a lot like Russia’s use of fake news in its recent disinformation campaigns. However, such an operation would have to work very differently from the 2016 election interference effort.

As described in a recent Oxford report, the Internet Research Agency’s campaign of disinformation sought to foment polarization and disenchantment through widespread dissemination of fake ads and stories. Between 2015 and 2017, these reached more than 30 million users on Facebook and Instagram. Furthermore, the “troll farm” targeted these ads and stories at particular communities. It specifically sought to convince African American voters to boycott the 2016 election and to stoke confrontational behavior among conservative voters.

A scientific disinformation campaign would have to operate quite differently. First, the audience that could reasonably be expected to read a scientific publication is much smaller. Most scientific and research publications are read principally by academics and practitioners within the field and do not attain wide readership. Second, despite the proliferation of online publishing tools and predatory journals, publications in recognized and peer-reviewed journals still benefit from an added measure of credibility and acceptance in the field. Finally, the mere fact of publication does not necessarily mean that a paper has the capacity to deceive; many scholarly readers are likely to react skeptically to pieces that advertise extraordinary findings. After all, the readers themselves are generally steeped in the particular knowledge of the field.

Thus, to be most effective, a scientific disinformation operation could not merely rely on online outlets but would need to deceive existing gatekeepers within the scientific community. Success would require precisely targeting a fake science operation at a particular research area and creating a sufficiently detailed fake study to pass off its “findings” as legitimate. Achieving widespread dissemination and acceptance within a field would depend more on publication in a widely read and recognized journal than on spreading the fake science through open-access fora. 

These constraints do not impose impossible barriers to a fake science operation. Indeed, activists have repeatedly demonstrated that the peer-review process can be deliberately deceived. But these differences limit the ability of such an operation to function as a clear analog to fake news by creating additional difficulties. While an operation could attempt to rely purely on open-access platforms, such as SSRN, to spread fake science, these open-access platforms lack the implicit credibility of a publication with a peer-review system, so the effect of an open-access operation would likely be more limited.

Of course, popular, nonacademic readers could be deceived by such publications, but then the dynamics of the operation would not be meaningfully different from fake news. The Segals’ report, for instance, convinced large swaths of the general public but had only modest effects in the scientific community. Many scientists from the Soviet bloc disagreed with the Segals’ conclusions. By 1988, as the Soviet Union began to grapple with HIV/AIDS among its own population, the Soviet Academy of Sciences disavowed any link between the U.S. government and the virus.

The Workings of Fake Science Operations

Scientific fraud is nothing new. The blog Retraction Watch maintains a large database of papers retracted from scholarly publications for reasons ranging from innocent error to the deliberate faking of peer review. Examining the harm caused by prior instances of academic fraud suggests the damage that state-directed scientific disinformation could cause to legitimate research efforts:

(1) When undiscovered, scientific fraud can lead to fruitless and wasteful efforts to build on prior, fraudulent findings. Researchers might undertake costly projects designed to replicate reported data that later turns out to be entirely false. Indeed, some fraudulent papers have been cited extensively before being discovered. 

(2) When discovered, scientific fraud can contaminate an entire field of research. A 2014 National Bureau of Economic Research paper shows a pronounced decrease in citations to papers in fields afflicted with significant retractions. The authors hypothesize that these observed spillover effects reflect scientists’ fear of reputational harm in being associated with discovered fraud. 

(3) The presence of scientific fraud in a field can impose front-end costs by causing publishers to implement more stringent verification systems, such as more expensive plagiarism-detection technologies.

(4) These verification systems could become overly inclusive. The discovery of fraud in one area of a field might increase institutional skepticism of related efforts. Thus, researchers in a field beset by fraud are likely to face greater difficulty in obtaining grants or being published due to concerns that their research is fraudulent as well. 

Viewed alone, any one of these burdens might seem trivial. However, it is worth remembering that publications in scholarly journals have an outsized impact on the careers of individual researchers and can help determine a researcher’s ability to obtain tenure and grants. The fear of being associated with fraud, along with other publishing difficulties, could easily lead researchers to neglect otherwise viable and promising efforts.

These problems bear important national security dimensions. The state of a country’s scientific research constitutes an important aspect of its long-term strategic outlook, and basic research can be highly important for national security. Additionally, academic research sits upstream of the technologies and trade secrets that countries strive to protect. Although governments possess their own controlled and classified research operations, many—including the U.S. Department of Defense—rely on the basic research performed by academic institutions. It is unclear whether existing research integrity offices would be prepared for a fake science threat.

A state-directed fake science initiative could flood a particular subfield with multiple, sophisticated fake studies. With sufficient information, a state-directed fake science operation could target precisely the fields where a rival state relies the most on academic and scholarly research. The harm caused would mirror that created by academic fraud but at a higher level of intensity. This could sow substantial confusion in the academic community and breed mistrust as scholars discovered the fraudulent nature of the studies without necessarily attributing them to state actors. Although the damage would not last indefinitely, the operation could slow research and significantly impair open scientific collaboration.

Is the Threat Real? 

At present, however, the available evidence does not suggest countries are trying to inject fake science into scholarly and academic publications. Consider the case of China. Over the past few years, news reports have linked the country to a significant output of scientific fraud. In one shocking incident, a scholarly journal retracted 107 articles from Chinese sources, largely due to faked peer review. At the time, the journal was published by Springer Nature, one of the largest players in the field. China tops the list of source countries of scientific misconduct in absolute terms, and one estimate attributes roughly half of all retractions due to faked peer review to Chinese sources.

But unlike China’s deliberate and directed state-based hacking and trade secrets theft, pervasive academic fraud appears to be an unintended consequence of China’s innovation policy. The country’s near-existential focus on technological innovation has exacerbated the emphasis on publications. Many Chinese universities award cash prizes to professors for journal publications, with the amount varying according to the prestige of the publication. A single publication in the leading scientific journal Nature, for example, nets an average of $44,000. This mix of pressure and incentives has contributed to a sizable black market for faked peer review, data and empirical testing.

Indeed, the country reportedly wastes a large portion of its research budget on fake science. In response, the government recently introduced new academic enforcement guidelines. These new regulations empower the Ministry of Science and Technology to investigate and rule on cases of academic misconduct, record and publish the results of these cases in a new national database, and administer a blacklist of known predatory journals that accept fraudulent publications.

These measures appear motivated by a desire to curb the fraudulent publication of scientific research, not by a policy of disseminating and injecting fake science into the academic community. Moreover, the country has also adopted rigorous data-control measures, which impose broad restrictions on the flow of scientific data out of China.

In short, China’s experience helps to highlight the dangers of a fake science operation. Given the rate at which research findings spread through the scholarly community, a nation-state actor would have no guarantee that its fake science would not be picked up by its own researchers. A state-directed fake science initiative could occur only under limited circumstances. For instance, a state might deploy fake science as a spoiler tactic to disrupt research in a field where it has not made any significant investments. 

Moreover, the presence of fake science in the Chinese academic marketplace has not stopped the country from achieving impressive scientific milestones, such as its recent moon landing. Indeed, high retraction numbers likely reflect less a culture of fraud than the realities and pressures of a large-scale funding system for basic scientific research. After all, the United States ranks just behind China in total retractions, and prominent cases of fraud have involved researchers at U.S. institutions. A state-directed operation might cause greater harm than purely decentralized misconduct, yet the total effect of fake science might amount to no more than some inconvenience to an isolated project.

These realities suggest that the more a country has invested in its own scientific research efforts, the lower the likelihood that it will try to “poison the well” by spreading fake science through the scholarly press. By contrast, a country with a smaller scientific research base might attempt such an operation.

The structure of academic and scholarly publishing means that any fake science operation faces challenges in expanding beyond the already-disreputable predatory journals and causing significant harm to the targeted research efforts. Individuals working alone will likely continue to be the major sources of scientific fraud for the foreseeable future. Indeed, a concern for foreign intervention, however timely, should not obscure the many cases of research misconduct and fraud that originate entirely from domestic researchers.

Regardless of foreign activity, increasing research integrity is a desirable outcome. Some scholars have advocated more active prosecution, and the Federal Trade Commission has already taken action against some of the more egregious predatory journals. These types of actions certainly bear security benefits, but the benefits are more diffuse, improving the state of the country’s general research efforts rather than countering a specific threat. Indeed, if the threat of state-directed scientific disinformation does materialize, a system that can quickly identify and isolate faked research should ultimately prove more resilient. Thinking ahead to the potential for fake science can better equip research institutions to respond to targeted disinformation while preserving an open scientific community.
