8 May 2019

THE EXISTENTIAL CRISIS PLAGUING ONLINE EXTREMISM RESEARCHERS

PARIS MARTINEAU

A COUPLE OF hours after the Christchurch massacre, I was on the phone with Whitney Phillips, a Syracuse professor whose research focuses on online extremists and media manipulators. Toward the end of the call, our conversation took an unexpected turn.

Phillips said she was exhausted and distressed, and that she felt overwhelmed by the nature of her work. She described a “soul sucking” feeling stemming in part from an ethical conundrum tied to researching the ills of online extremism and amplification.

In a connected, searchable world, it’s hard to share information about extremists and their tactics without also sharing their toxic views. Too often, actions intended to stem the spread of false and dangerous ideologies only make things worse.

Other researchers in the field describe similar experiences. Feelings of helplessness and symptoms associated with post-traumatic stress disorder—like anxiety, guilt, and anhedonia—are on the rise, they said, as warnings go unheeded and their hopes for constructive change are dashed time and time again.

“We are in a time where a lot of things feel futile,” says Alice Marwick, a media and technology researcher and professor at the University of North Carolina at Chapel Hill. “We're up against a set of bad things that just keep getting worse.” Marwick co-authored Data & Society’s 2017 flagship report, Media Manipulation and Disinformation Online, with researcher Rebecca Lewis.

In a way, their angst reflects that of the tech world at large. Many researchers in the field cut their teeth as techno-optimists, studying the positive aspects of the internet—like bringing people together to enhance creativity or further democratic protest, à la the Arab Spring—says Marwick. But it didn’t last.

The past decade has been an exercise in dystopian comeuppance to the utopian discourse of the ’90s and ’00s. Consider Gamergate, the Internet Research Agency, fake news, the internet-fueled rise of the so-called alt-right, Pizzagate, QAnon, Elsagate and the ongoing horrors of kids’ YouTube, Facebook’s role in fanning the flames of genocide, Cambridge Analytica, and so much more.

“In many ways, I think it [the malaise] is a bit about us being let down by something that many of us really truly believed in,” says Marwick. Even those who were more realistic about tech—and foresaw its misuse—are stunned by the extent of the problem, she says. “You have to come to terms with the fact that not only were you wrong, but even the bad consequences that many of us did foretell were nowhere near as bad as the actual consequences that either happened or are going to happen.”

Worst of all, there don’t appear to be any solutions. The spread of disinformation and rise of online extremism stem from a complex mix of many factors. And the most common suggestions seem to underestimate the scope of the problem, researchers said.

Some actions—like adding content moderators on platforms like Facebook, developing more advanced auto-filtering systems to take down problematic posts, or deploying fact-checking programs to flag and derank disinformation—rely too much on platforms’ ability to police themselves, some researchers say. “It's so easy to start fetishizing the technical aspects of these problems, but these are first and foremost social issues” that are too complex to be solved by tweaking an algorithm, says Lewis.

Other approaches, like media literacy programs, may be ineffective and place too much responsibility on users. Both sets of tactics ignore messier, less quantifiable parts of the problem, like the polarized digital economy, where success is predicated on attracting the most eyeballs; the way rejecting “mainstream” truths has become a form of social identity; or the challenge of determining the actual impact of disinformation.

“It's not that one of our systems is broken; it's not even that all of our systems are broken,” says Phillips. “It's that all of our systems are working ... toward the spread of polluted information and the undermining of democratic participation.”

“We are in a time where a lot of things feel futile.”

ALICE MARWICK, UNIVERSITY OF NORTH CAROLINA

The internet is an unreliable narrator, and any attempt to interpret online actions with the same sincerity afforded to those in the real world is fraught. Some seemingly influential accounts, like @thebradfordfile—which has over 125,000 followers on Twitter, and has been cited by outlets like The Washington Post and Salon as an example of far-right thinking—are shams, and only appear to wield influence thanks to paid engagement schemes, tweet-boosting DM rooms, and other means of artificial amplification. The metrics by which we gauge an idea or individual’s worth online are easily manipulable. Likes, retweets, views, followers, comments, and the like can all be bought.

That vulnerability brings many of our assumptions about life online into question. Sockpuppet networks—fake accounts created to make an idea or view seem more popular than it actually is—appear on Reddit, Facebook, and elsewhere. In the case of Pizzagate—a particularly toxic conspiracy theory about a DC restaurant, which culminated in 2016 when a gun-wielding adherent opened fire in the establishment—a legion of automated Twitter accounts helped the conspiracy theory gain traction by making it appear to have more real-world supporters than it actually did.

In 2017, the Kremlin’s Internet Research Agency used 133 fake Instagram accounts to spread disinformation, gaining more than 183 million likes and 4 million comments. How many of those heart buttons were tapped by genuine, unsuspecting followers, versus paid fake human followers or an automated engagement farm? Likewise, are the conspiracy theories and dehumanizing beliefs in spaces like r/The_Donald, r/Conspiracy, or 4chan’s /pol/ genuine views expressed by real people? Does the fact that the Internet Research Agency and others have masqueraded as people to stoke tensions draw that into question? Or does the fact that we still don’t know whether those fake posts had any real impact draw that question itself into question?

It’s enough to make your head spin. In most cases, there’s no way to tell. Media manipulators are acutely aware of the performative nature of posting in a public space and use that to their advantage. Extremists misrepresenting themselves or their views to reach a wider audience isn’t exactly new, but the power of the internet and the rise of social media have exacerbated the problem.

“The problem with not knowing if something is serious or satirical is that [it] renders any intervention efforts moot out of the gate,” says Phillips. “That's what makes the work feel sometimes so futile, because we don't even know exactly what we're up against and we don't even know what we would do to try to fix it.”

The infectious toxicity of the subject matter further complicates matters. Warning others about an absurd, polarizing conspiracy theory, a particularly egregious new sect of extremists, or a bout of disinformation—even just to debunk it, or to call attention to the depravity of such views—is a paradox. Last summer, after adherents of a toxic online far-right conspiracy theory known as QAnon were photographed at a Trump rally, many media outlets published explainers, debunk guides, listicles, and op-eds about the disinformation-ridden conspiracy theory, catapulting it into the mainstream consciousness. Google searches for terms associated with the conspiracy theory skyrocketed, and online communities for adherents to those ideas swelled in size.

Intentions aside, the result of giving oxygen to disinformation is the same: The toxic core idea reaches a larger audience, and the impact of that information is out of your hands. Labeling extremist content or disinformation as “fake news” doesn’t neutralize its ability to radicalize. These sorts of thoughts and ideas are sticky. It’s like that (oft-copied) thought experiment posed by Dostoevsky in 1863: Tell yourself not to think of a polar bear, and it’ll inevitably come to mind every minute.

“Just by calling attention [to the fact that] a narrative frame is being established means that it becomes more entrenched,” says Phillips. “And in a digital media environment it becomes more searchable. It becomes literally Google search indexed alongside whatever particular story [is written about it].”

After an MIT researcher was lauded for her role in capturing the first image of a black hole, conspiracy theorists and extremists began spreading disinformation about her through YouTube videos. A series of articles and social media posts decried the fact that a YouTube search of the researcher’s name primarily surfaced conspiracy theories; then, some of those posts went viral, further tying her name to those false claims in the eyes of search engines and online indexes.

Similarly, the manifesto published by the alleged shooter in the Christchurch massacre was riddled with extremist dog whistles—terms or ideas that, when plopped into Google or YouTube, would lead a searcher down a rabbit hole of radicalizing disinformation. A CJR analysis found that a quarter of the most-shared stories about the shooting mentioned these ideas, giving “information that could lea[d] readers to online discussion of the shooter’s ideology.”

"We don't even know exactly what we're up against and we don't even know what we would do to try to fix it.”

“It's impossible to raise attention to these issues without being subject to the same exact system, and that fundamentally hinders our efforts,” says Lewis. “Because the people that end up getting the most attention talking about these issues are the ones that also, to a certain extent, are exploiting the attention economy.”

The cycle of amplification is seemingly unavoidable, posing quandaries for those who study it. “What do you do with that knowledge once you realize that any single thing that [you] do or don't do is going to be part of how the story unfolds? That's a huge cognitive burden,” says Phillips.

The very nature of the work is itself a cognitive burden. Online extremism and media manipulation researchers spend their days sifting through hate-speech-ridden Reddit threads, dehumanizing YouTube videos, and toxic chat rooms where death threats and active harassment campaigns are par for the course. The deluge of hate and extremist content takes a toll on their mental health and leaves some with PTSD-like symptoms, much like those experienced by content moderators at Facebook, they say.

“When you are researching things that have a real emotional impact on you ... you have to feel that emotion, because, if you get inured to the racism or the misogyny or the hatred, you're no longer doing your job,” says Marwick. “You're seeing this deeply racist garbage all day long. You can't pretend it is not happening. And I think that has an effect on you.”

Unlike the content reviewers of social media giants, online extremism and media manipulation researchers aren’t protected by a veil of anonymity. Their work is public, and in many cases their contact info is displayed prominently on university websites and academic profiles, opening them to harassment. “I've been doxxed several times; I've gotten some very nasty comments, but I've never been the center of a total shitstorm and I feel like it's just a matter of time,” says Marwick.

Part of the problem, Phillips says, is that most users don’t think about the ramifications of every retweet, or Facebook post, or upvote. Without a sense of communal impact or personal responsibility, it’s increasingly difficult to expect a shift towards less toxic behavior and amplification.

“We need to think more holistically and more humanely about how we get folks to think about how they fit amongst other people,” Phillips says. “It is sort of a Copernican revolution—I am not the center of Facebook; there are other people—and it seems small, [but] … most people can understand that, if you frame it to them in the right way. And that could change how you interact with the people around you in your world more broadly.”
