OKCupid: as clueless as Facebook, but not as evil.

Much has been made recently of this post on the OKCupid blog. In this post, OKCupid “confesses” to experimenting on users in order to verify that their algorithm works, in such a tone as to suggest that this is an obvious thing that everyone does and what of it?

In the process, Rudder (the post’s author) fails to grasp the distinction between what Facebook did that garnered so much opprobrium and what OKCupid did (which I, and I think most people, would join him in considering fairly routine).

What kind of experiment?

OKCupid’s experiment is manifestly related to the purpose of the site from its users’ point of view. They were trying to verify that their algorithm for matching users worked better than a placebo. This is actually both fairly decent experimental design and fairly decent behavior. The match algorithm is the purpose of OKC’s existence as far as its users are concerned — they’re there because the algorithm should be offering them better than random chance of hooking up with someone they’ll actually be compatible with. If it doesn’t work, OKC isn’t doing its job. Ergo, testing the algorithm is important, and beneficial to users. Plus, they tested it against telling people that they are a good match, which is fairly perceptive — they rightly deduced that such information would be likely to have a substantial placebo effect, and decided to check whether they could do better than just saying people are a good match and determine that they actually are.

(The actual outcome of this experiment sort of surprised me — they’re not as much better than placebo as I expected. Humans are easy to influence.)
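To make the design concrete: the core of a placebo-controlled test like the one described above is deterministic bucketing, so each user consistently sees either the algorithm’s real output or the placebo. This is a minimal sketch under my own assumptions (the function names, the 50/50 split, and the flat placebo score of 90 are illustrative, not OKCupid’s actual implementation):

```python
import random

def assign_bucket(user_id: int, seed: int = 0) -> str:
    """Deterministically assign a user to an experimental arm, so the
    same user sees the same condition on every visit."""
    rng = random.Random(f"{user_id}:{seed}")  # seed from the user ID
    return "algorithm" if rng.random() < 0.5 else "placebo"

def displayed_score(user_id: int, true_score: int) -> int:
    """Show the real match percentage in the 'algorithm' arm, and a
    flat 'you're a great match!' number in the 'placebo' arm."""
    if assign_bucket(user_id) == "algorithm":
        return true_score
    return 90  # hypothetical placebo value

# Two users with the same true compatibility may see different numbers;
# comparing message rates across arms then measures algorithm vs placebo.
users = {101: 35, 102: 35}
shown = {uid: displayed_score(uid, score) for uid, score in users.items()}
```

The point of hashing the user ID rather than flipping a coin per page load is that a user who refreshes shouldn’t flip between arms, which would contaminate both buckets.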

Facebook’s experiment that got them in trouble wasn’t clearly related to the purpose of the site. You can make some arguments that it’s indirectly related, but doing an experiment (a badly designed one at that) to determine whether emotional contagion is a thing does not clearly relate to the stated purpose of Facebook. It’s not clear that Facebook has a single purpose, but let’s take “connecting with people we care about” as a vague one for its users. If Facebook wants to change the proportions of things in my News Feed to see if I spend more time on the site or share more things or comment more (I’m sure it does do all of those things), that would be kind of like what OKCupid did. Instead they deliberately changed the proportions of things in the News Feed with the goal of finding out whether it made people feel and behave more negatively. That’s not beneficial to anyone, really. It’s just experimenting for experiment’s sake, and even if they hadn’t published it in a journal I’d think it was an asshole move, as well as bad experimental design (sentiment analysis of short texts is known to be unreliable). But in that case it wouldn’t have been scientifically unethical.

Experiment vs Science

“Experiment” is so often used in a scientific context that I think it’s easy to forget that we all do experiments all the time — we take actions and we have hypotheses about the outcome and we compare what the outcome was with what we expected it to be. (I do it for a living, for goodness’ sake — what is troubleshooting but a set of experiments designed, ideally, to eventually fix a problem?) But doing an experiment and then trying to make it part of the body of scientific knowledge frequently requires all kinds of additional hoops to jump through — proper experimental design, valid statistical analysis, and, importantly, informed consent if you’re going to do it on human subjects.

When I originally posted about this (ironically, on Facebook itself), informed consent was the issue I focused on, and it’s clear that Christian Rudder isn’t the only one who doesn’t understand it. There’s a good analysis of the issue at ScienceBasedMedicine.org which clearly discusses what informed consent is (and why Facebook’s TOS doesn’t meet it) as well as the limits on the requirement for informed consent. It’s really quite a limited requirement; although it’s a research best practice, it’s only required if you’re at or collaborating with a university, using federal research money, or publishing in certain journals. So you can even contribute to scientific knowledge without doing it, as long as your collaborators, funders, and publishers don’t mind.

Facebook and the journal that published their research did not follow this guideline even though it’s required by the journal’s policy and their collaborators’ institutional policy. What they did is therefore unethical, as well as an asshole move. As I put it in my original post:

In an attenuated sense, informed consent is an extra bar you have to clear to be considered to have done real science that you can publish in a reputable journal — it’s a kind of trade deal…if you don’t collaborate with universities or use federal funding, you don’t have to clear the bar, and can still publish if the journal doesn’t require you to meet those standards either, but at that point you lose a lot of the brand recognition you get from publishing with academics in a well-known journal.

The history of informed consent is too long to recap here (I recommend The Immortal Life of Henrietta Lacks, if you’re in the market for a book about it), but it’s a very important safeguard in keeping researchers from harming subjects without the subjects’ knowledge, or from extracting benefits that go only to the researchers and not to the subjects. The purpose it serves is in making the body of scientific knowledge and the practice of science something that people can trust, particularly in the area of medical research, but also in the area of social science research. Also it keeps people from being harmed, or from failing to benefit, when they haven’t OK’d it (e.g. from being given a placebo while being told there is a 100% chance they are getting real medicine), which I hope we all agree is a good thing.

Facebook wanted to get all the benefits of science without any of the drawbacks; that’s what made scientists (or at least people trained in that mode) so specifically pissed off about what they did. OKCupid didn’t do that — they didn’t even publish their research until they felt like making a point with it. And I hear Rudder’s writing a book, so he doesn’t need to worry about peer review and federal funding. (Unless the book gets bad reviews, in which case he might wish he had gotten some peer review first.)

Unfortunately, Rudder’s cluelessness about these two important distinctions tells me that only circumstance and luck keep OKCupid from being equally awful. Maybe we do need to have a bigger conversation about whether social experiments on unwitting site users are ever okay, if only to improve people’s understanding of the issues involved.
