
Last month, it was revealed that Facebook had conducted an experiment on nearly 700,000 members without their knowledge or consent to determine how its News Feed algorithm affects users’ emotions and moods. Showing users more positive content led them to share more positive posts, and vice versa with negative content. Many users, journalists, and data scientists were at once outraged at the ethical implications of Facebook toying with emotions, and disturbed by the power its algorithms hold over the well-being of the more than one billion people who actively use the site each month.

The silver lining here is that it’s opened up a dialogue about the influence tech companies wield with their algorithms, and how to hold firms accountable for how they use them. Yesterday, The Berkman Center for Internet & Society at Harvard University hosted a conversation about the best research approaches and practices for determining how these algorithms work, and how they affect everything from our emotional wellbeing to our buying habits.

The first big problem posed to researchers, writes MIT PhD student J. Nathan Matias on MIT’s Center for Civic Media blog, is that not all companies release the code underlying their algorithms. And even when the code is public, as is the case with Reddit and the winners of Netflix’s recommendation prize, researchers would also need access to the firm’s full data sets to fully replicate the algorithm for experimental purposes, said Christian Sandvig, a research professor and associate professor of communication studies at the University of Michigan, at the event.

Surveying users is another method, but that’s dependent on a number of subjective factors, like a respondent’s memory and perception of their experience. “It’s difficult,” Matias writes, “to ask very large numbers of people very specific questions like ‘seven days ago, did your news feed have fewer words?’”

Another technique Matias points out is the use of so-called “sock puppets,” fake user accounts created by the researcher to test outcomes. Unfortunately, this practice can run afoul of computer fraud laws.

To overcome these hurdles, Karrie Karahalios, an associate professor of computer science at the University of Illinois, designed a clever and elegant experiment. Forty test subjects (who, unlike in Facebook’s own experiment, gave their consent for the use of their data in the research) were shown two Facebook feeds side by side. One was curated by Facebook’s News Feed algorithm and was identical to what the user would see when logging into the site normally. The other was a raw feed of everything shared by the user’s friends or by the pages they follow. Posts that appeared in both feeds were highlighted.
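At its core, that side-by-side view is a set comparison: take the raw stream of everything the user’s network shared, check which of those posts also made it into the curated feed, and mark the overlap. Below is a minimal Python sketch of the idea; the post records, field names, and the compare_feeds function are hypothetical stand-ins, not the research team’s actual tool.

```python
def compare_feeds(curated_feed, raw_feed):
    """Annotate each post in the raw feed with whether it also appeared
    in the algorithmically curated feed."""
    curated_ids = {post["id"] for post in curated_feed}
    return [
        {**post, "shown_by_algorithm": post["id"] in curated_ids}
        for post in raw_feed
    ]

# Made-up posts standing in for the two feeds shown side by side.
curated = [{"id": 1, "author": "Alice"}, {"id": 3, "author": "Carol"}]
raw = [
    {"id": 1, "author": "Alice"},
    {"id": 2, "author": "Bob"},
    {"id": 3, "author": "Carol"},
    {"id": 4, "author": "Dave"},
]

for post in compare_feeds(curated, raw):
    print("SHOWN " if post["shown_by_algorithm"] else "HIDDEN", post["author"])
```

Everything flagged HIDDEN here is, by definition, what the algorithm filtered out of the user’s view.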

The first thing that jumped out to Karahalios’ team was that most of the test subjects, 62.5 percent, were unaware or uncertain that an algorithm was at work when they visited Facebook. “Often, people became very upset when posts from family members and loved ones were hidden,” Matias writes. Yet fascinatingly, their discovery of the algorithm’s existence eventually made them use Facebook more, not less.

“After learning about the algorithm,” writes Matias, “when users went back to Facebook, they reported using Facebook’s features more, switching more often between top stories and most recent stories features, being more circumspect about their likes, and even dropping friends. Since that time, some users became more involved and spent more time on Facebook.”

In other words, transparency can often foster better engagement than secrecy. That’s probably a good thing for tech companies to realize. However, as noted by Cedric Langbort, a professor and member of the Information Trust Institute at the University of Illinois, opening up a dialogue about its algorithms is also a way for Facebook to get even more data from you.

Karahalios eventually wants to allow anyone to use her dual-feed app, in an attempt to crowdsource insights from as many Facebook users as possible, with their consent, of course. Interestingly enough, that puts her team in a position similar to Facebook’s: steward of massive sets of personal data. Wouldn’t an advertiser be interested in Karahalios’ findings as a way to take better advantage of Facebook’s algorithm and target its ads to consumers more precisely? What are her ethical responsibilities when making her findings public?

In any case, the importance of what these researchers call “algorithm auditing” can’t be overstated. From a psychological perspective, the practice helps determine how these algorithms shape users’ thoughts and decisions, and from a legal perspective it can be used to ferret out potential discriminatory practices based on race, religion, or other attributes. In fact, the notion of these “collaborative audits,” as proposed by the panelists, is rooted in housing discrimination audits, writes Matias.
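For a sense of what such an audit can look like in code, here is a loose sketch of a paired test in the spirit of those housing audits: matched profiles that differ only in a single attribute are run through a black-box decision function, and the positive-outcome rates are compared. Everything in the sketch, including the black_box_decision stand-in, the profile fields, and the built-in bias, is invented purely for illustration and does not come from the panel or any real system.

```python
import random

def black_box_decision(profile):
    # Stand-in for the opaque system under audit; it is deliberately biased
    # toward group "A" so the example produces a visible gap.
    return random.random() < (0.7 if profile["group"] == "A" else 0.5)

def audit(base_profiles, attribute, values, trials=1000):
    """Return the positive-outcome rate for each value of `attribute`,
    holding the rest of each profile constant."""
    rates = {}
    for value in values:
        positives = 0
        for _ in range(trials):
            profile = dict(random.choice(base_profiles), **{attribute: value})
            positives += black_box_decision(profile)
        rates[value] = positives / trials
    return rates

# Matched profiles that differ only in the audited attribute.
base_profiles = [{"income": 55000, "zip_code": "61820", "group": None}]
print(audit(base_profiles, "group", ["A", "B"]))
```

A large gap between the two rates isn’t proof of discrimination on its own, but it is exactly the kind of signal a collaborative audit is designed to surface and investigate.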

Despite widespread condemnation of Facebook’s emotions study, not everyone was so upset. Some adopted that favorite pose of the NSA defender, asking “Why is anyone surprised?” Others felt genuine sympathy for the company. They were just trying to make the News Feed better, people!

But the goal of these emotional audits isn’t to crucify companies; it’s to learn. And as bigger and bigger chunks of our lives, digital or otherwise, are filtered through algorithms that purport to “optimize” the experience (For whom? Users? Advertisers?), understanding how they work and holding companies accountable for how they’re implemented is more important than ever.

[illustration by Hallie Bateman for Pando]