7 November 2016

Debunking Lynch Mobs: Ethical Approach To Online Harassment And Free Speech – Analysis

NOVEMBER 4, 2016

In 2013, Justine Sacco sabotaged her own career with a tweet posted just before she boarded a plane to South Africa: “Going to Africa. Hope I don’t catch AIDS. Just kidding. I’m white!” Sacco was then the global head of communications for the digital media conglomerate InterActiveCorp (IAC), and she had some 200 followers on Twitter. The tweet, intended as sarcasm, sparked what would later be described as “an ideological crusade.” Twitter users contacted Sacco’s boss, who in turn tweeted: “This is an intolerable and offensive comment. The employee in question cannot be contacted until she gets off the plane.” For many, this was a sign that their complaints and criticism had “paid off.” The anger quickly turned into euphoria: “I’m dying to see Justine Sacco get off that plane”; “Dumb bitch, we’ll watch her get fired live.”

Since Sacco worked for a private company, and in the field of communications, her boss had every right to fire her. Sacco’s tweet may have been a mistake, but she, of all people, should have known how far such a blunder could travel: managing communications was her job. Justine Sacco was not fired simply because an Internet “lynch mob” asked for her head. Rather, she was fired because a mistake she made at work – in the very field she was paid to manage – triggered a virtual version of a “lynch mob”.

On social media, language lives in a paradox: it has an oral intent but a written format. What we say online is conceived with the immediacy of speech, and somehow we still expect our words to be forgotten the way any reckless joke told to friends would be. But our words on social media are written; a message can remain on the Internet forever, and since dissemination over the Internet has global reach, our words are highly susceptible to being taken out of context. This has ethical and moral implications and requires changes in our behaviour. It is not the same for Justine Sacco to tell her joke to a friend (who will probably soon forget it) as to put it in writing on a social network, where it can be read by people who do not know her well enough to understand her sense of humour and intentions. The same tweet would carry a different weight if it were written in a book or in an official statement; in those cases it would be far more serious than something said casually on social media. Not everyone is held to such high standards of political correctness, but everyone with a public persona – journalists, communicators, politicians, public figures – should exercise, if not considerable care, at least an awareness of how technology amplifies the impact of their words (and even more so of their mistakes).


This awareness of speech is not the same as self-censorship. A lynch mob on social media is undoubtedly better than one in the real world, and definitely preferable to silence because, in the end, censorship is even more violent than verbal violence. The effect of words on the Internet is different from the effect of a fist fight, or of pitchforks and torches. After all, civilisation began when humans went from throwing stones at each other to hurling insults. But insults on the Internet can be amplified, and there is no denying the hurt that words can cause. So what happens when cyberbullying has effects beyond a smear or a dismissal? Can online bullying be blamed for causing irreparable damage? If so, how can we regulate it?

“Angry mobs” on social media often have no plan, no conspiracy and no leader. They are not necessarily right or fair, and we cannot be sure they are as big as they feel; a few influencers can make a lot of noise on the Internet. The vast majority of online lynch mobs are spontaneous and emotional, governed by the same rules Gustave Le Bon (1895) explored in his psychology of crowds.[1] However, no matter how uncontrollable the “lynch mob” turns out to be, its effects are directly related to the vulnerability of its “victims.” As in offline life, mass demonstrations are powerful and meaningful, especially when they occur peacefully and collectively for a cause considered “fair.” But all crowds are susceptible to irrational and violent behaviour. Controlling these situations often depends on individuals developing the sensitivity to resist their own cruelty.

In April 2015, the programmer Rachel Bryk, 23 years old and known for her contributions to the Dolphin emulator, took her own life. Bryk was also a prominent figure in the transgender community and among application developers. She had been the target of relentless transphobic attacks and online bullying, which triggered a bout of depression that eventually led to her suicide.

There are radical differences between bullying someone because of their gender identity and bullying a communications professional for failing to foresee the effect of a clearly discriminatory comment on the Internet. For Bryk, the Internet was a place where she could work out her own identity, but also a channel through which she could be attacked. A situation of vulnerability, coupled with attacks on the Internet, contributed to her suicide. It is not exactly a hate crime, although the similarities are striking.

In Bryk’s case, the harassment (whether the aggressor knows it or not) goes far beyond a “death threat on Twitter.” The insults may have the same intention as an obscene scribble on a bathroom door, but their impact is much stronger. In circumstances like Bryk’s, discourse takes on a performative role. According to John L. Austin (1962), language is performative when words, rather than describing an action, perform it.[2] For example, verbs like ‘swear’, ‘promise’, ‘declare’, ‘bet’, ‘baptise’ and ‘marry’ actually have an effect on reality when they are uttered.

Words are powerful; they build a symbolic field that shapes the way we understand the world and our emotions. In cases of online harassment, ceaseless messages arrive through smartphones and personal computers, two of the most intimate devices a person can own in modern times; many people literally take them to bed every night. Imagine having the violence of bullying that close. Yet a hurtful word, however persistent, does not by itself compel a dismissal like Sacco’s, nor drive a person to suicide. A person with no history of discrimination, living in a stable environment with a strong support system, may not be as vulnerable to bullying as someone in more precarious conditions. The effects of online lynch mobs are psychological and should not be underestimated; reactions are as complex as human emotions and, like them, depend on context and environment. Withstanding online bullying means attending to these social vulnerabilities and strengthening support systems; countering its emotional effects requires plenty of support. A policy of social inclusion of minority groups (both online and offline) will do more to reduce the harmful effects of online bullying than “preventive” self-censorship.

In 2014, the scientist Matt Taylor helped land a spacecraft on a comet. When he spoke to the media about this achievement, he wore a shirt printed with illustrations of scantily clad women holding guns. Some feminists called the shirt ‘sexist’; opinion pieces came out analysing what it meant. Taylor then appeared on television to apologise, in tears, which led people to talk about the “evil horde of feminists” who had “lynched” and “censored” him. Yet the online bullying against Taylor had nothing like the effects (or intentions) of the harassment in the cases of Sacco and Bryk. Several people criticised Taylor’s shirt on social networks, a few opinion journalists joined the criticism, the man apologised on television and went back to talking about his scientific achievements. He was neither censored nor fired. Criticism, even in its most flamboyant form, cannot be equated with bullying, online harassment or censorship. Being embarrassed on the Internet, as Taylor was, is far from being the victim of a lynching. Feelings of shame, according to David Hume (1739), are appropriate and useful for regulating our ethical behaviour and moral emotions.[3] In Taylor’s case, there was no job loss or lasting psychological damage. It is not a matter of people simply having different sensitivities; there are clear lines that distinguish criticism from online harassment. Yet on the Internet, criticism of every kind – legitimate, good, bad or exaggerated – is often indiscriminately called lynching.

Interactions on social networks show that, in fact, the distinction between good and evil is not based solely on rational deliberation, and that moral judgments are neither absolute nor universal. People are not motivated by reason and logic alone; moral feelings are an important driver of our actions. The Internet is a privileged space in which to observe social regulation. If only David Hume were alive to see it.

Hume argued that morality is essentially based on feelings he called “moral sentiments”: approval associated with the happiness of mankind and resentment at its misery. These sentiments motivate what we call “virtuous actions”, which in turn awaken moral sentiments in others, and this feedback produces social regulation. For Hume, sympathy is the tendency to be moved by other people’s emotions; it is what allows individuals to relate to one another. On social media, a Like, Fav or Retweet often reflects just such a simple, perishable feeling of immediate sympathy.

This sympathy is one of the things that motivates people to act collectively as a group, or even as a mob, on the Internet. Indeed, some say the mobs that “lynched” Sacco were well-intentioned at first, aiming to “defend rights.” But whose rights? Perhaps that claim is exaggerated, and people simply defend political correctness because it readily provokes feelings of virtue and belonging. With each “Like” or “Fav” on social media, we build shared values, a symbolic field of what is considered “good” and what is regarded as “bad.” Wearing a shirt with a so-called sexist illustration or posting a racist tweet did not always awaken collective indignation; we have spent years building a symbolic field of language in which these actions are rejected. In the exercise of public debate, we construct symbols that change how we perceive actions and the moral emotions those actions stir.

Undoubtedly, the Internet is a great tool for participation in global public debate. Social rejection is needed to regulate our behaviour, especially because legal or criminal punishment may lead to censorship or other restrictions on the right to freedom of expression. When someone makes a racist or homophobic comment, or attacks or discriminates against any group, censorship is the least desirable solution; even the most absurd or prejudiced arguments should be said out loud so that they can be debated.

“Lynch mobs” on the Internet exist and have real effects, but the only way to regulate them effectively is social regulation. Criminal or legal punishment has dire consequences; it would be madness to send every Internet troll to jail or to prosecute online harassment groups, especially since the term “lynch mob” is usually inaccurate and is often used to stigmatise minority groups. Emotions cannot be switched off or penalised. Censoring offensive speech goes against the right to freedom of expression; we must remember that insults are sometimes legitimate social complaints. Social media is a space for scrutiny of public figures, and this is unlikely to change. It is also a natural arena for public debate, and it should be assumed that everything said on social networks is some sort of opinion unless explicitly stated otherwise. The right to freedom of expression implies that each person must be held responsible for what he or she says on and off the Internet. In addition, most countries already have extensive laws against harassment, threats, extortion, slander and libel, which can be used to handle cases of “cyberbullying” without inventing “new laws” for the digital realm.

In the end, each person has to go through an inner ethical negotiation between being respectful and empathetic, or aggressive and confrontational. Both positions may be valid depending on the circumstances, and both are protected by the right to freedom of speech; sometimes one speech strategy is simply more effective than the other. Of course, nothing exempts us from being aware of the context in which we say things, of the privileges we hold and of the potential impact of our amplified words. The verbal violence users experience online does not emanate from networks or computers; hatred and brutality are human emotions that can only be regulated by other human emotions, such as empathy and compassion. Perhaps it is as simple as being mindful of the specific circumstances of the person we are engaging with online, and of our own desire to exert control over others. Communication should not be a violent act of conquest.

This article originally appeared in the third volume of Digital Debates: The CyFy Journal.

[1] Le Bon, Gustave. [1895] 2002. The Crowd: A Study of the Popular Mind. Mineola, New York: Dover Publications.

[2] Austin, John L. 1962. How to Do Things with Words. Oxford: Clarendon Press.

[3] Hume, David. [1739] 2003. A Treatise of Human Nature. Mineola, New York: Dover Publications.

