
Perhaps the most startling thing about Facebook’s efforts to control the emotions of nearly 700,000 people without their explicit consent is the fact that some people are actually defending the company.

In fairness, some of those people also believe that Google should have a private militia and that Eric Schmidt should become America’s chief executive, so it’s no surprise that they would lend their support to the company’s right to do essentially whatever it wants to its users. But there are other, more rational people who also believe that Facebook should be allowed to “tweak its algorithms,” which is how they characterize the experiment, without thinking of the repercussions.

The problem is that conflating Facebook’s experiment with the changes tech companies make to countless other algorithms every day is a bit like saying that a doctor shoving poison pills down a person’s throat is okay because other doctors were writing legitimate prescriptions at the same time. It’s a willful misrepresentation of a disconcerting experiment that should be discussed instead of being accepted as part of the modern, algorithm-driven world in which we’ve decided to live.

This wasn’t a simple tweak meant to see what might happen if Facebook were to change its News Feed. The company’s decision to increase the number of news articles or decrease the number of promotional status updates shown in the stream of seemingly random content fits that description. Its effort to determine the emotional effect it can have on its users by toying with nearly 700,000 people differs in all but execution.

And when this study is considered for what it actually is — a psychological experiment that affected hundreds of thousands of people — instead of as a technological issue, it seems far more sinister than its supporters would have us believe. As Max Masnick, a researcher who does “human-subjects research every day,” explains in the Guardian’s report on the study:

As a researcher, you don’t get an ethical free pass because a user checked a box next to a link to a website’s terms of use. The researcher is responsible for making sure all participants are properly consented. In many cases, study staff will verbally go through lengthy consent forms with potential participants, point by point. Researchers will even quiz participants after presenting the informed consent information to make sure they really understand.

Even Adam Kramer, one of the researchers involved in the study, said something similar in a Facebook post defending the study and the company’s motivations behind performing it:

Having written and designed this experiment myself, I can tell you that our goal was never to upset anyone. I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused. In hindsight, the research benefits of the paper may not have justified all of this anxiety.

While we’ve always considered what research we do carefully, we (not just me, several other researchers at Facebook) have been working on improving our internal review practices. The experiment in question was run in early 2012, and we have come a long way since then. Those review practices will also incorporate what we’ve learned from the reaction to this paper.

It’s clear why anyone concerned about the ethics of manipulating the emotions of hundreds of thousands of people without explicit consent views Facebook’s experiment as a mistake. So why do some people insist on defending Facebook’s supposed right to experiment on its users, despite being told that checking a box on a Terms of Service agreement isn’t the same as agreeing to participate in a psychological experiment?

Perhaps it’s because Valley culture has made it easy to think of millions or billions of people as little more than data points used to draw revenue from advertisers. Because this study technically required just a few tweaks to an algorithm, it’s easy to view it as a harmless experiment meant to gather more data, which has become increasingly sacred to the Valley. It’s hard to remember that a billion data points are not just bits and bytes but individual people, people who might worry about Facebook’s ability to manipulate their emotions.

Or maybe the simpler explanation, which Holmes suggested in his post about the study and its ramifications, is better: Facebook just doesn’t give a shit about the people who use its service and is more than willing to experiment with their mental health because of the culture of contempt that it has created within the confines of One Hacker Way. It knows that there are people on the other end of those algorithms — it just doesn’t care. It exists to make people spend more time online, not to worry about how other people feel.

Still, it’s hard to imagine why people outside the company would defend its actions. Maybe they also don’t care about other people. Maybe they’ve become so jaded that the idea of a technology company performing psychological experiments seems like a normal Saturday. Or maybe they want to give Facebook the chance to make everyone feel like they do, thus ending the criticism they face for suggesting that Eric Schmidt should be placed in charge of an entire goddamn country. Take your pick: no matter which motivation you choose, the truth is that we now live in a world where Facebook’s supposed right to perform unethical experiments is defended.

As Holmes put it, it might be time for Facebook to make all of our News Feeds just a little bit happier. We’re going to need it to get through this week — and perhaps the rest of our lives.

[Illustration by Hallie Bateman for Pando]