
Last week, OkCupid published a blog post sensationally titled “We Experiment on Human Beings!” In the post, the company’s cofounder Christian Rudder detailed a series of tests the site conducted on users, the most controversial of which involved telling prospective couples that they were 90 percent matches when in fact they were only 30 percent matches. This was done, Rudder writes, to test the efficacy of the site’s matching algorithm and to measure the power of suggestion when it comes to attraction.

At the time, I wrote that OkCupid’s experiments weren’t nearly as egregious as those conducted by Facebook, which had tweaked its News Feed algorithm to show some users more positive stories and others more negative stories to see if this altered people’s moods. It did. OkCupid’s experiments were less troubling to me because Facebook is in the business of leveraging data to sell more ads, so the argument that the company was merely A/B testing its product to improve the user experience doesn’t hold water. OkCupid, on the other hand, seemed to be legitimately trying to gain insight into human behavior in order to create better matches. In other words, OkCupid wants to manipulate its users into having a better experience on its site. Facebook wants to manipulate users into buying more crap.

But as Rudder continues to engage with the media in the fallout of this controversy, I’m beginning to rethink my initial, casual reaction to the experiments. Late last week, Rudder came on TL;DR, a sister podcast of NPR’s On the Media,* to talk with Alex Goldman about these user manipulation tests. And while I still think Facebook’s tinkering was far worse for a number of reasons, as I detailed in that earlier post, the OkCupid founder’s arrogantly flippant attitude toward these experiments raises red flags about how responsible the company will be in the future when it comes to toying with its users’ emotions.


As NYU journalism professor and First Look Media adviser Jay Rosen pointed out on his PressThink blog, the most troubling soundbite arises when Goldman asks if OkCupid would consider bringing in an ethicist to vet the experiments.

Rudder’s response: “To wring his hands all day for a hundred thousand dollars a year?”

I guess that’s a no.

Okay, so personally, I did not think that OkCupid’s experiments went over the line. First off, the algorithm that determines compatibility is hardly magic: it basically lines up how similarly two users answered a series of survey questions on everything from politics to hygiene. And as any avid OkCupid user knows, you learn more about a person in five minutes of in-person conversation than you do from an often-embellished profile and a little number.
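
For the curious, here is a minimal sketch of what that kind of answer-matching can look like. To be clear, this is an illustrative toy in Python, not OkCupid’s actual code: the questions, the importance weights, and the geometric-mean combination are all assumptions made for demonstration.

```python
from math import sqrt

# Toy matcher: each user answers questions and, for each question,
# specifies which answers they'd accept from a partner and how much
# they care. All names and weights here are made up for illustration.

def satisfaction(their_answers, my_prefs):
    """Weighted share of my questions that the other person answered acceptably."""
    earned = possible = 0
    for question, (acceptable, weight) in my_prefs.items():
        if question in their_answers:
            possible += weight
            if their_answers[question] in acceptable:
                earned += weight
    return earned / possible if possible else 0.0

def match_percent(a_answers, a_prefs, b_answers, b_prefs):
    # Combine both directions with a geometric mean, so a high match
    # requires satisfying both people rather than just one.
    return round(100 * sqrt(satisfaction(b_answers, a_prefs) *
                            satisfaction(a_answers, b_prefs)))

alice = {"tidy?": "yes", "politics": "left"}
alice_prefs = {"tidy?": ({"yes"}, 50), "politics": ({"left", "center"}, 10)}

bob = {"tidy?": "yes", "politics": "right"}
bob_prefs = {"tidy?": ({"yes", "no"}, 1), "politics": ({"left"}, 10)}

print(match_percent(alice, alice_prefs, bob, bob_prefs))  # 91
```

The point of the toy is simply that “compatibility” here is arithmetic over self-reported answers, which is why that little number carries less authority than it appears to.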

However, Rudder’s pompous reaction to the mere suggestion that ethics matter when it comes to user manipulation is disturbing. Telling mismatches that they were matches, then quickly informing users that this had been done, is one thing. But what if Rudder, following in Facebook’s footsteps, were to hide some messages that contained more positive words while leaving up messages with negative terms? Would a user who appeared to be “playing hard to get” elicit more attraction from the recipient? I would be interested to find out, but I would be incensed if OkCupid had deleted messages I sent in the interest of “improving the site experience” or “probing the mysteries of human attraction.”

Rudder also deflects criticism by saying the manipulation done by tech sites is nothing compared to what makeup companies do to make women feel insecure. Why aren’t people angry about that, he wonders. Goldman responds that, yes, people are angry about that. And just because a company in an entirely different field is doing something far worse doesn’t mean we shouldn’t hold large tech firms, whose influence is only increasing, to higher standards.

Rudder also laughs at the notion of “informed consent,” stating that when psychology departments conduct experiments, subjects don’t know exactly what is being tested either — after all, that could potentially spoil the results. But a psychology experiment is limited in scope and time, whereas sites like Facebook and even OkCupid have become utilities in our lives — we are constantly under observation and experimentation as long as we keep these companies as parts of our regular routines.

But the biggest false equivalence drawn by Rudder and other defenders of user experimentation is the claim that manipulating the moods, and potentially the relationships, of users is no different from A/B testing a site’s design or usability, however noble the goal. Again, given the limited scope and quick disclosure of OkCupid’s 30/90 percent match experiment, I doubt anyone lost out on meeting the love of their life (or ended up marrying a total trainwreck) because of the tinkering. But Rudder’s unwavering belief that these experiments raise few ethical problems makes me seriously wonder how far the site is willing to go in the interest of “improving its algorithm.”

As more and more of our experiences, both digital and otherwise, are filtered through opaque and ever-morphing algorithms, it’s crucial to hold tech companies accountable for the consequences of their constant tinkering. As I wrote a couple of weeks back, that tinkering can have legal or discriminatory ramifications that go far beyond a person having a lousy date. And after hearing how blind Rudder is to the ethical concerns raised by OkCupid’s meddling, the need for this accountability is more pressing than ever.

[Image via From Aspire to Beyond]

*An earlier version of this post said the interview was conducted by On the Media, but it was done by its sister podcast TL;DR.