Tuesday, December 19, 2017

You Are Doing Emotional and Human Labor For Your Fave Social Media Sites

[Content note: Internet abuse/harassment]

Alexis Madrigal at The Atlantic ran a piece about a recent conference at UCLA on content moderation on the Internet.

I recommend reading it in its entirety, but here are a few important takeaways that correspond with observations on this topic I've made over the years.

First, content moderation is labor. Many people who run websites know this from experience, even if they don't consciously articulate it as such. It's also labor that automation cannot (yet) do well, because so much abuse is heavily context-dependent.

Second, because this labor is done by humans, the people doing it are regularly exposed to traumatic content. Per Madrigal's piece, "reviewing violent, sexual, and disturbing content for a living takes a serious psychological toll on the people who do it." A consistent theme in pieces I've read on this topic is that those who do this work for many years often develop PTSD-like symptoms.

A former Myspace content moderator said, in Madrigal's piece:
“When I left Myspace, I didn’t shake hands for like three years because I figured out that people were disgusting. And I just could not touch people. Most normal people in the world are just fucking weirdos. I was disgusted by humanity when I left there. So many of my peers, same thing. We all left with horrible views of humanity.”
Catherine Buni and Soraya Chemaly, writing at The Verge, and Adrian Chen, at Wired, have also referenced this toll and the corresponding high rates of burnout among people who do content moderation for a living. For instance, via Chen's piece, a former YouTube moderator who was exposed to videos that included animal torture, decapitations, and horrific traffic accidents noted:
“Everybody hits the wall, generally between three and five months. You just think, ‘Holy shit, what am I spending my day doing? This is awful.’” 
If you read enough accounts of how content moderation happens (or doesn't) at large tech companies, you also begin to see a pattern, which brings me to my third point: the creators of many user-generated content platforms have historically put very few resources into content moderation before introducing new platforms to the public. This pattern corresponds, I believe, with the volumes that have already been written about the toxic culture of libertarian techbros who think their platforms will, or should be, entirely self-regulated among users.

Not helping matters, the conversation often gets simplified to an absurdly stupid level of, "On one side, we have people who believe in free speech. On the other are people who can't handle tough conversations." This falsely dichotomous framing happens both within tech culture itself and in mainstream media reporting on this topic. That much of this work is done by contractors, off-site and abroad, helps further invisibilize, at least to many commentators in the US, what this work actually entails.

Meanwhile, content moderators and users of these platforms are left floundering when, whoops, people end up using these platforms in ways that are far darker than anyone (ostensibly) anticipated. Madrigal's piece, for instance, describes a former Myspace content moderator recounting her experiences (emphasis added):
"Bowden described the early days of Myspace’s popularity when suddenly, the company was overwhelmed with inappropriate images, or at least images they thought might be inappropriate. It was hard to say what should be on the platform because there were no actual rules. Bowden helped create those rules and she held up a notebook to the crowd, which was where those guidelines were stored.

'I went flipping through it yesterday and there was a question of whether dental-floss-sized bikini straps really make you not nude. Is it okay if it is dental-floss-size or spaghetti strap? What exactly made you not nude? And what if it’s clear? We were coming up with these things on the fly in the middle of the night,' Bowden said. '[We were arguing] ‘Well, her butt is really bigger, so she shouldn’t be wearing that. So should we delete her but not the girl with the little butt?’ These were the decisions. It did feel like we were making it up as we were going along.'”
Consider the broader context in which a woman was developing moderation rules, on the fly, in a notebook: during its heyday in 2005, Myspace was the top social media site in the world and was bought for $580 million.

Which brings me to my final point.

When tech platforms don't put sufficient resources into content moderation, or when existing moderation rules and practices are arbitrary and ineffective, you - the users of these platforms - are doing the emotional and human labor of content moderation for these companies. For writers, content moderation, and the psychological fallout when it doesn't exist or is deeply flawed, become labor added to the work of writing whenever we publish or promote our work on platforms that are not well moderated.

In a way, it's ironic. Folks across the political spectrum can't stop talking about the abundance of purportedly oversensitive "snowflakes" in society these days. And yet, I think it's reasonable to say that most Internet users are actually exposed to traumatic content somewhat regularly. We've also largely accepted exposure to this content as "normal," without really beginning, as a society, to grapple with its effects.

In a popular piece at Medium, James Bridle recently wrote about frightening videos posted on YouTube to scare children, ultimately saying:
"What concerns me is that this is just one aspect of a kind of infrastructural violence being done to all of us, all of the time, and we’re still struggling to find a way to even talk about it, to describe its mechanisms and its actions and its effects."
Bridle concludes that "responsibility is impossible to assign."

But, I'll go there: Libertarian techbro culture has long posited that user-generated content platforms would simply work themselves out as self-regulated communities. Yet, we see over and over again that these communities, in practice, privilege bullies and/or those who can, somehow, inure themselves to the worst effects of the toxic cultures embedded within them.

In my own experience, people on the Internet have been telling me to kill myself for as long as I've been writing online - so, for more than 10 years. This message has come, most often, in the form of Twitter replies and comments at my blog. Sure, I can block a Twitter user. But usually, when I report such Tweets to Twitter, I receive no reply or follow-up regarding whether that user is banned.

From a bigger-picture standpoint, I don't know a single person who has been running a website or blog, particularly if they're a woman, who hasn't had repeated run-ins with traumatic content or targeted harassment.

I think often about the voices we've lost over the years, and there have been many, because of the toxic cultures that thrive on platforms where the performance of content moderation labor falls on us, as users and writers.

These harms are not something people in my generation (Gen X, if you're curious) really grew up learning how to deal with, or that, in my experience, many mental health professionals are even equipped to understand. I think many people have simply adapted to living with at least a low-grade state of anxiety about what they might encounter on the Internet on any given day, particularly if they do a large portion of their work on the Internet.

I deal with a lot of it by telling myself, "It's not real. It's not real. It's not real. If this person really knew me, they wouldn't say that." That is a coping mechanism, sure. But the people I know who have been doing this work for a long time have developed a variety of informal tools for coping when platforms fail to put adequate resources into moderating content: gallows humor, desensitization, creating intentional communities not centered around the usual sociopathic norms of Internet culture, and so forth.

I don't offer a clear answer here other than a plea to shift our thinking, as users of the Internet. Twitter, Facebook, whatever "free" social media sites you use - these aren't really free. In many cases, you are performing labor for them that they, for whatever myriad reasons, have abdicated. The impact that might be having on you is very, very real.
