Pando

The invisible labor of content moderation

By Gaby Goldberg, written on September 16, 2020

From The Culture Desk



Effectively moderating a social computing system isn’t just the right thing to do from an ethical standpoint — it’s in a platform’s best interest. Moderation shapes the platform: as a tool, as an institution for discussion, and as a cultural phenomenon.



Paid Moderation

With paid moderation, a third party reviews reported content, which helps avoid brigading and supports a more calibrated, neutral evaluation. Facebook is just one example of a platform that uses this method, employing about 15,000 content moderators directly or indirectly. If there are three million posts to review each day, that works out to 200 posts per moderator: 25 every hour of an eight-hour shift, or under 150 seconds per post.
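For anyone who wants to check the arithmetic, here is a minimal back-of-envelope sketch. The 15,000-moderator and three-million-post figures are the ones cited above; the even split of posts across moderators and the eight-hour shift are simplifying assumptions.

```python
# Back-of-envelope estimate of review time per post for a paid moderation team.
# Headcount and volume are the figures cited above; the even workload split
# and the eight-hour shift are simplifying assumptions.
MODERATORS = 15_000        # content moderators, employed directly or indirectly
POSTS_PER_DAY = 3_000_000  # posts to review each day
SHIFT_HOURS = 8

posts_per_moderator = POSTS_PER_DAY / MODERATORS    # 200 posts per day
posts_per_hour = posts_per_moderator / SHIFT_HOURS  # 25 posts per hour
seconds_per_post = 3600 / posts_per_hour            # 144 seconds per post

print(f"{posts_per_moderator:.0f} posts per moderator per day")
print(f"{posts_per_hour:.0f} posts per hour")
print(f"{seconds_per_post:.0f} seconds per post")
```

Under these assumptions, each post gets less than two and a half minutes of attention before the next one arrives.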

A demanding job like this can result in major emotional trauma for moderators who spend days on end reviewing troubling content (related: Facebook paid $52 million to content moderators suffering from PTSD). Additionally, evaluators may only have seconds to make a snap judgement about a specific piece of content, so there’s still room for error.


Community Moderation

We see community moderation on sites like Reddit, Twitch, and Steam. On Reddit, for example, users moderate specific subreddits, removing content that breaks the rules. One user, who moderates over 60 subreddits, considers moderating “a glorified janitor’s job, and there is a unique pride that janitors have… When I’m finished for the day I can stand back and admire the clean and functioning subreddit, something a lot of people take for granted.”

Stronger actions have worked on Reddit, too: Reddit’s 2015 ban of two subreddits for violating its anti-harassment policy resulted in the affected accounts either leaving the platform entirely or migrating to other subreddits and drastically reducing their hate speech. On Twitch, community moderators are responsible for removing content and banning users in real time. One study of pro- and anti-social behavior on Twitch found that moderating content or banning users substantially decreases negative behaviors in the short term.

Community moderation leverages users’ intrinsic motivation, and the local experts in a given corner of the Internet are more likely to have the context needed to make difficult calls. That said, community moderators can often feel bitter that they don’t get the recognition they deserve, and can resent that the platform seems to profit off their free labor. On top of all this, community moderation can vary in quality: it’s not necessarily consistent, fair, or just.

These issues are well articulated in this New York Times article, which describes moderators as “forces for stability and civility in the raucous digital realm. Or that is, they’re supposed to be.” Community moderators can hold extreme power on their respective platforms, and they often wield it with profound unpredictability. During the infamous 2015 Reddit Revolt, moderators upset with a decision made at Reddit’s corporate level chose to shut down their forums, which collectively garnered millions of visits each day. These moderators wanted to voice their feeling that the company’s leadership “[did] not respect the work put in by thousands of unpaid volunteers.” A week after the Reddit Revolt, Ellen Pao — the company’s interim chief executive — resigned.


Algorithmic Moderation

When Facebook, YouTube, Twitter, and other tech companies sent workers home to protect them from the coronavirus, they ran into a host of new content moderation challenges: as these platforms leaned more heavily on automated systems to flag content, more and more posts were erroneously marked as spam because of weaknesses in the algorithms. Some content moderation can’t be done outside the office for privacy and security reasons, so these companies were suddenly dealing with a lot more information to moderate and a lot fewer people to review it.