In an era that has seen journalists wage a war on journalism and an establishment news organization float the idea of repealing the First Amendment, I guess I shouldn’t be all that surprised that I was censored on my own Facebook page. But when I received this message yesterday from Mark Zuckerberg’s company, I was certainly confused:

[Screenshot: Facebook’s warning message]

The rejoinder in question appeared on my own Facebook page in a thread about my recent Pando article. That piece looked at President Obama’s blatant lie about his power to reschedule marijuana under federal law. I was responding to an Obama apologist who mindlessly repeated the lie and insisted that “you know damned well the Republicans would accuse him of being an Imperial President” if he did anything on any issue.

This agitprop is standard — and infuriatingly mindless — apologism from die-hard Democratic Party loyalists. To them, every instance of overt hypocrisy or dishonesty from their preferred politician is justified because the GOP inevitably will criticize him. Of course, the GOP will criticize Obama no matter what he does, which means that this tautology is designed not to make a factual point, but instead to rationalize anything and everything the White House does regardless of the facts.

It means, in other words, that yes, those who use this self-serving crap to deliberately distract from inconvenient truths are being willfully stupid and are, hence, willful ignoramuses.

You can disagree with me on that. Further, you can think I’m an asshole for my choice of language (“willful ignoramus”). And you should be able to see what I wrote, and tell me all that without the fear of Facebook halting the conversation. As the administrator of my own page, I should have the right to block you if I don’t want to talk to you, just like you should have the right to block me if you don’t want to talk to me.

But the overlords at Facebook should refrain from intervening. After all, providing a forum for disagreements is ostensibly one of the purposes of a Facebook discussion thread. How can that happen if Facebook effectively shuts down that discussion?

For its part, Facebook says it is just trying to enforce “community standards,” and when you read those standards, they make perfect sense. But here’s the thing: As evidenced by my little experience and a growing number of other cases, Facebook isn’t deploying its censorship power in a way that diligently adheres to its own rules.

In my particular case, asking a user on my own Facebook page to stop being “willfully stupid” or a “willful ignoramus” may not be polite. In retrospect, maybe I should have used less blunt language. However, my response to a commenter on my own Facebook page was obviously not a threat of violence; it was not a pornographic image; it was not an act of harassment; and (assuming Facebook doesn’t claim willful ignorance is a clinical pathology) it was not “an attack based on (a user’s) race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or medical condition.” In other words, it was not a violation of Facebook’s community standards.

It’s the same thing in other recent cases (all of which are far more serious than my little spat). In the last few years, Facebook has reportedly censored images of a couple and their terminally ill infant, a statement supporting African American films, a non-sexual photo of a woman breastfeeding and comments the company simply deems “irrelevant.” Again, none of that violates Facebook’s community standards.

So what’s going on here?

Some of it has to do with Facebook’s automated algorithms designed to prevent spam. Because they aren’t perfect, they periodically block content that isn’t spam. We can probably take Facebook’s word that these specific episodes aren’t deliberate acts of censorship based on the substance of the content in question.

Some of it also has to do with user retribution against content that makes them angry. Rather than blocking the offending user or simply deleting the comment from their own page, some users hit the “report abuse” button against comments they don’t like. In punish-first-ask-questions-later fashion, Facebook may automatically respond by sanctioning the alleged offender. That’s probably what happened to me.

Of course, Facebook, Twitter and other social media platforms shouldn’t get rid of their abuse reporting systems. There’s plenty of nasty stuff online that clearly violates Facebook’s community standards, and there’s other stuff that’s just straight-up illegal. Whether we’re talking about Facebook or any of its competitors, social media companies have a legitimate right and a moral obligation to keep their platforms as free of that toxic sludge as possible.

But when overzealous automation or misguided human decision-making indiscriminately censors material that complies with a company’s explicit community standards, that’s wrong.

From a fairness perspective, a system like that embraces a guilty-until-proven-innocent ethos. In the short term, that may be the most efficient way to deal with thousands of complaints on a platform with 1.2 billion users. Over the long haul, though, it can affect social media firms’ core business. It can reduce the incentive for user interaction and the attendant refresh-button clicks. Worse, if censorship gets really out of control, it can encourage users to simply leave a social media site altogether and find a place that’s a little less uptight.

For my part, I probably won’t leave Facebook over such a ridiculously silly episode, but I might interact with users less. I prefer contemporaneous discussions; I don’t feel like meticulously editing myself in them, and it’s not worth the hassle of some oversensitive user getting me blocked from my own page. But if this happens more frequently, I might bail. That’s not a threat. It’s an admission of my low tolerance for hassle.

Now, this isn’t a First Amendment issue because it has nothing to do with government limits on speech. This is about private companies, which have every right to decide what can appear, and what gets censored, on their private property (same thing, it should be said, for individual social media users; they should have the right to decide which users get to access their own pages, and which users they want to block).

That, however, doesn’t mean this kind of censorship is unimportant. Just like all the abuses of government surveillance shouldn’t negate an examination of the problems with private surveillance, overzealous social media censorship isn’t insignificant just because it is privately administered. To the contrary, with more people relying on social media for basic communication, and with repressive regimes trying to get companies to censor material those regimes don’t like, private censorship is a serious issue (even if individual episodes of such censorship — like mine — are fairly inconsequential). That censorship may be legal, but it shouldn’t be downplayed or absolved of criticism just because it is being done by a corporation and not a government.

Facebook, of course, may not care about the political, cultural or ideological implications of any of this. The company may see it purely as a bottom-line calculation. But that calculation cuts both ways: if social media sites want to keep users coming back over the long haul, they might want to be at least a bit more explicit about what we can and cannot do, and a little more discerning about using their censorship power.

Otherwise, they’re being — yes — willfully stupid about what drew so many to social media sites in the first place, and what could drive so many away in the future.