
How to tell the difference between a shouting match and violence

Written by Emily Shaw
Published on May 27, 2016

This past weekend, I thought a lot about the Matt Bruenig affair, and particularly about the idea of “attack mode on Twitter,” which was described in a press release by Demos, a left-wing think tank, about the end of his employment.

I thought about the women I watch online — the ones I admire, identify with, and like to read — who describe the stalking and violence they have experienced in response to their writing. I thought about how women who write online get more negative and violent content thrown at them than men do. (It may be particularly bad for women who write about traditionally male-dominated subjects like sports and video gaming, and it is pushing some women to stop writing about those topics altogether in order to avoid abuse.)

I thought about myself and how carefully I write online. I’m American and have exceptional luxuries in how my freedom of speech is protected, and yet I can’t help feeling personally vulnerable. As an online writer, and particularly as a female online writer, I feel like I’m always one errant tweet or blog post away from a campaign of rape and death threats.

For that I blame the fact that we don’t yet agree on the difference between an online shouting match and online violence.

The issue is right there in the term “attack Twitter.” The whole concept of using a publication format to “attack” gets awfully close to interpersonal violence all by itself.

Now, as with other kinds of specialized tastes enjoyed by consenting adults, I’m willing to believe that there are a number of men and women who like to get into online shouting matches. (You do you, people.) I also believe, as civility advocates argue, that there is a collective price to pay for normalizing this kind of discourse, because it changes who is willing to engage. At the same time, I believe we have some control over avoiding “shouting match” environments. To create the kind of online experience I want, I do what most people do: I follow people who have the kinds of conversations I want to be part of and unfollow those who are unpleasant. When this approach succeeds in creating the kind of environment you want, you know that any fights you have are consensual.

When this doesn’t work, though, and you can’t avoid the attack, the interaction is no longer consensual. At that point, it’s not just a shouting match. It’s violence.

When “attack Twitter” goes beyond consent, it truly — and intentionally — causes real mental and physical harm. Certain recurring features signal that a consensual online shouting match has escalated into real violence. When social media-based attacks contain physical threats or gendered or racialized terror, or persist over time and across platforms (including into offline life), they are violence and need to be taken very seriously. Any decent person will denounce attacks containing these features. Any decent person will end relationships with people who engage in them.

You would have to be deliberately ignorant at this point not to know that online attacks can result in serious harm. Social media attacks have caused a number of documented suicides. Threats of violence against individuals and events that originated in online “attack Twitter” campaigns have had serious real-world consequences.

As someone who wants to see an expansion of online political life, I’m really concerned that the places where people go to talk about politics — social media and comments sections — are the places where online harassment most frequently occurs. As someone who wants to make public participation more equitable, I am especially concerned by the ways that online violence particularly affects women and people of color. Online violence has specific consequences for them. In general, online abuse directed at women is more likely to be sexualized. It is also more likely to persist over time: the Pew Research Center found that over a quarter of young women have reported being stalked online. The Guardian studied the comments left on its articles and found that of the ten Guardian writers receiving the most abuse through online comments, eight were women. The other two were black men.

There’s a lot of change to make before online spaces are genuinely safe for people. Understanding what constitutes online violence, and reinforcing this understanding in our networks, should be the easy part.

The following elements demonstrate that an online interaction is no longer consensual and has moved into violence:

  • Threats of physical violence (e.g., “I will kill you”). This includes doxxing, which carries the implicit threat of real-life violence against the person whose details are published.
  • Gender-based or racialized violence (e.g., references to rape or sexual violence, lynching, etc.). These are particularly vicious forms of harassment because they evoke long histories of violence used against members of the target’s demographic group. To make things worse, people targeted with these threats are more likely to see them as credible real-life threats than more general threats of violence, and so they’re more likely to have a silencing effect.
  • Persistence over time and across platforms, where a targeted person experiences threats or harassment over a sustained period and across their different accounts and platforms. Persistent, cross-platform abuse can be carried out by a single individual or by a coordinated group working, often on a separate platform, to “take down” a targeted person. Women are much more likely to report being targets of long-term and cross-platform harassment. This may be related to the fact that the platforms where coordinated harassment campaigns are known to be organized skew demographically young and male. For example, a subreddit (a type of anonymous online forum) known as “KiA” polled itself and found a 91.3-to-8.7 male-to-female membership ratio. This group was particularly associated with launching anonymous online and offline attacks on female journalists and video game designers in the now years-long campaign known as “GamerGate.”

For more perspective on online abuse, see the Women’s Media Center Speech Project’s Online Abuse 101.

I joined Open Heroines because I wanted to help make sure that the open government and open data field becomes broadly representative and equitable. This is also a hope that I, and many of my colleagues, have for all of our political spaces. Just as intimidation and harassment keep people from participating in real-life politics, we can’t expect free and open participation online when we turn a blind eye to online violence. Agreeing to fight it in our own online political networks is the first step.

