Pando

Facebook won't allow us to promote our story on Google's war against the homeless

Because no shit it contains profanity

By David Holmes, written on June 23, 2015

From The Press Fucking Freedom Desk

Facebook wants to be seen as some kind of heroic force of good for journalism, saving reporting jobs one "like" at a time.

One reason the company pushes this narrative is that its business model -- at least in part -- depends on having news organizations provide it with free content. Recently, these efforts have stretched beyond merely serving as a distribution hub for news stories to hosting the content itself, in some cases sharing the ad revenue produced by stories that news organizations willingly hand over to Facebook. It's therefore critical -- we can all surely agree -- that Facebook prove its devotion to independent, truth-seeking journalism, even and especially when it attacks the powerful.

That's why I was more than a little disturbed, and disheartened, at Facebook's response when I tried to promote Yasha Levine's story on Google's war against the homeless in LA.

It's an important piece that deserves as much attention as it can get, and moreover it has been picking up a greater-than-average portion of its traffic from Facebook since we published it yesterday morning.

Under the rationale of Facebook's own external messaging, which encourages publishers to post and promote original journalism that resonates with its audience, it made perfect sense to drop some extra cash on boosting its exposure on the social network.

But a few minutes after submitting the promotion, I received the following message from Facebook:
Your ad wasn't approved because it doesn't follow our Advertising Policies, which apply to an ad's content, its audience and the destination it links to.
We don't allow ads that use profanity. Such language can offend viewers and doesn't reflect the product being advertised.
No fucking kidding there's profanity in the piece. These naughty words come in the form of direct quotes from homeless people who have allegedly been spat on, bullied with nightsticks, and intimidated off the public sidewalks in front of Google's LA offices -- in violation of the city's laws -- all because these largely peaceful individuals were scaring the "geeks." And so it's no wonder that the victims quoted in Levine's piece used some colorful language in describing their tormentors.
Pretty much immediately I wrote this back to Facebook:
This ad was not approved because it links to a story that contains 'profanity.' This profanity, however, is in the form of direct quotes from people negatively impacted by the actions of a powerful corporation. I think it's perfectly reasonable for Facebook -- if it considers itself a serious steward of news -- to approve a promoted post like this, which reveals very real societal ills happening in the state of California. I would ask that, because of the social justice this post seeks to bring about, you please make an exception and approve this ad.
It's insane that I even have to explain to Facebook why an article containing profanity might still be of value to the network's users. Hours have passed and I've yet to hear back from the company. It's possible they'll eventually approve the post once a human has considered my request. Of course by then the story will be old and the social traffic might well have lost steam (although hopefully not).
This robot-and-human double act gets at one of the biggest problems with Facebook becoming the dominant platform for news distribution: it almost certainly was not a human who struck down my ad. It was an algorithm, crawling around looking for dirty words and kicking out anything that offends the precious sensibilities of these robots.
Considering the amount of content posted to Facebook daily, it's unreasonable to expect a human to examine each and every post. But if Facebook is to rely so heavily on algorithms, and especially considering its growing influence over what content the world does and doesn't see -- an influence that's rivaling Google in its power -- the company absolutely needs to offer more transparency about how these algorithms operate.
For example: Is all profanity banned? If a "classy" organization like The New Yorker or The Economist posts an article with profanity, is an exception made? What about posts regarding President Obama's use of the n-word on Marc Maron's podcast this week? Do the robots only blush at the full unasterisked slur, or is the term "n-word" also unacceptable? And if so, how can Facebook possibly consider itself a responsible steward of news when its algorithms frown upon what was arguably -- for better or worse -- yesterday's most talked-about news story?
As I've written before, Facebook's News Feed algorithm is both a magic wand and a black box. Anytime someone complains about Facebook and says something like, "There are too many listicles and not enough real news," or "Hey, why is my News Feed so squeaky clean that it's bereft of anything remotely interesting," Facebook simply blames it on the algorithms -- which are supposed to respond to what users want -- and "Poof!" there goes the company's sense of responsibility for offering consumers an intellectually robust newsreading experience. It's like when the New York Post says, "We only put Lindsay Lohan on the cover because that's what our readers want," only in Facebook's case the company takes even less responsibility.
What's worse, the algorithm itself lacks so much transparency that even Facebook's own data staffers need to conduct peer-reviewed scientific research in order to understand it. The logic here is like something out of Joseph Heller's Catch-22: Got a problem with Facebook? Take it up with the robot. Got a problem with the robot? Take it up with Facebook.
Facebook has every legal right to police content on its network however it sees fit. If it wants to block me from promoting this post because I say "fuck," "shit," and "cock" in it, that's its right as a giant, insanely profitable corporation.
Nor is Facebook required to reveal the proprietary inner workings of its algorithm to the public. As with other companies, like Uber, that guard their data the way a mother protects her baby, this "secret sauce" helps Facebook keep its competitive advantage over rival curation engines like Twitter.
But what's troubling is that this mega-corporation, which polices content inconsistently and with a Puritanical mindset that's stuck centuries in the past -- that is, when it's not relinquishing responsibility for how it serves up content altogether -- is becoming a dominant source of news in the digital age. According to recent studies, almost half of all web-using adults -- and 88 percent of Millennials -- use Facebook to find news.
It's easy to understand why news organizations want to post and promote content on Facebook -- you've got to go where your audience is. But episodes like this should make news organizations (including the New York Times and the BBC) that have been hosting their content directly on the platform consider whether Facebook is really the best partner to provide uncensored and often uncomfortable journalism that serves the public good.