Recommendation engines and the uniqueness of dislike

Twitter recently decided to change its fundamental paradigm and start putting content in your feed that it thinks you want to see, making it the last of the social sites I personally use to stop letting you tell it what you want and start guessing instead. It’s an obvious trend; it’s what Facebook has been doing with the News Feed for ages now. Google Plus refuses to let you hide your friends’ +1s, not to mention that it’s still trying to find me more friends and teach me how to use it, which is to say, remind me about features it doesn’t think I’m using enough, like those aforementioned +1s.

What’s interesting to me is that even though this approach is so common, it’s still very hard to do well. This came up in a recent lunch conversation at work, unrelated to Twitter, about iTunes. My coworker wanted iTunes Discovery to play music he didn’t already have but would enjoy, and it didn’t seem to be able to do that: it played either music he already knew and liked, or music he didn’t like.

Even Amazon and Netflix, which are widely acknowledged to be relatively good recommendation engines doing something relatively simple (recommending media), have trouble with edge cases. Another coworker shares a Netflix account with someone who enjoys “chick flick” movies and watches them on Netflix, yet doesn’t like it when Netflix recommends that type of movie to her. Why not? After all, she likes them, so the recommendation engine is doing what it’s designed for. But it’s not doing what she actually wants, which is to point her toward things she would like but would otherwise have trouble finding. Chick flicks are easy to find. She wants help finding the difficult-to-find.

In the end, this is why automated recommendation systems fail: humans have preferences that are too diverse to code algorithmically, even with algorithms that learn from data. To Netflix’s recommendation system, “I watched this, and liked it” has only one meaning: show me more things like this! To a human, it can mean “I liked this, but I know it was kind of a waste of time to watch, and I’d really like to watch something more interesting next time”.
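
To make that flattening concrete, here’s a minimal sketch of the naive pattern (this is not Netflix’s actual system; the catalog, tags, and similarity measure are all invented for illustration). The recommender’s only input is the set of things you liked, so a like can only ever mean “more like this”:

    from collections import Counter

    # Toy catalog: each title mapped to a set of descriptive tags.
    CATALOG = {
        "Sleepless in Seattle": {"romance", "comedy", "90s"},
        "You've Got Mail": {"romance", "comedy", "90s"},
        "The Notebook": {"romance", "drama"},
        "Primer": {"scifi", "indie", "puzzle"},
        "Moon": {"scifi", "indie", "drama"},
    }

    def similarity(a, b):
        """Jaccard overlap between two titles' tag sets."""
        ta, tb = CATALOG[a], CATALOG[b]
        return len(ta & tb) / len(ta | tb)

    def recommend(liked, k=2):
        """Score unseen titles by their closest match to anything liked.
        Note the signature: `liked` is the entire model of the user."""
        scores = Counter()
        for candidate in CATALOG:
            if candidate not in liked:
                scores[candidate] = max(similarity(candidate, l) for l in liked)
        return [title for title, _ in scores.most_common(k)]

    print(recommend({"Sleepless in Seattle", "The Notebook"}))
    # prints ["You've Got Mail", "Moon"]: more of the same, inevitably.

There is simply no parameter through which to say “I liked this, but don’t feed me more of it.”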

Take friend suggestion as the simple case for social networks. Social networks are predicated on the idea that if we like Amy and Andrew, we probably also know and like their friends Bernard and Bailey. Often that’s true; I do know and like many of my friends’ friends, and if I haven’t met them, I’d like to, because there’s a good chance we’ll get along. But a good chance isn’t a certainty. If I like my friend Mitch because we both do linguistics, but his friend Chad likes him because they play basketball together, and I don’t like basketball, I might not like Chad that much either, even if he’s a fine guy. But social networks don’t have any way of coding that. All they know is that I like Mitch, so I might like Chad too, right? Or maybe after many meetings you just haven’t cottoned to a particular person in your larger circle. Facebook insists that you must know each other, so of course you want to be friends. Right?! But you don’t. You already assessed the situation in person, and decided that you don’t.
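
Here’s roughly what that looks like in code, a hedged sketch of the plain mutual-friend heuristic (sometimes called triadic closure); the graph is made up, and real “People You May Know” systems surely weigh many more signals:

    from collections import Counter

    # Undirected friendship graph. Note what each edge *doesn't* record:
    # there is no field for why the two people are friends.
    FRIENDS = {
        "me": {"Mitch", "Amy"},
        "Mitch": {"me", "Chad"},  # linguistics on one edge...
        "Chad": {"Mitch"},        # ...basketball on the other, invisibly.
        "Amy": {"me", "Bernard"},
        "Bernard": {"Amy"},
    }

    def suggest_friends(user):
        """Rank non-friends by how many friends we share with them."""
        counts = Counter()
        for friend in FRIENDS[user]:
            for fof in FRIENDS[friend]:
                if fof != user and fof not in FRIENDS[user]:
                    counts[fof] += 1
        return counts.most_common()

    print(suggest_friends("me"))
    # prints [('Chad', 1), ('Bernard', 1)]

Chad and Bernard score identically, because “we share a friend” is the only fact the data structure can express.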

In the best case, the social network lets you code that information in some way. On Facebook, click the X button and the suggestion is gone (forever? I’m not sure anymore). Or block the person, if you really don’t want to see anything from them. But while blocking and Xing convey some sort of information back to the system, it’s relatively coarse-grained. I just ignore Facebook’s friend suggestions at this point; like my coworker and iTunes, it’s found me everyone it’s going to find. The rest of its suggestions will never be useful, and I don’t care to X out all of them one by one.

Content is even harder to curate cleverly. Facebook has been trying for years now with the News Feed, and although they’ve clearly had success in terms of engagement, it’s still an ongoing battle, the latest sally in which is reducing clickbait. Wait, didn’t we start out talking about content that people like, but don’t want to see more of? Oh, those silly humans, they just can’t stop themselves from sending mixed signals! Facebook is struggling with the same problem that my coworker’s friend finds in Netflix: a click currently can only code “I like this, it engages people, show me more of it.” Trying to make a click on clickbait not mean that, while a click on any other kind of article keeps its original meaning, is challenging to say the least.
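
For what it’s worth, Facebook’s announced fix reportedly leans on behavior after the click, such as how quickly you bounce back to the feed. Here’s a toy version of that idea; the thresholds and weights are invented purely for illustration:

    def engagement_signal(clicked, seconds_on_page, liked_or_commented):
        """Reweight a click using post-click behavior, so that a click
        alone no longer means 'show me more of this'."""
        if not clicked:
            return 0.0
        if seconds_on_page < 10 and not liked_or_commented:
            # Click-then-bounce: likely bait, count it against the story.
            return -0.5
        return 1.0  # A click someone actually dwelled on keeps its old meaning.

    print(engagement_signal(True, 4, False))   # -0.5: bounced off clickbait
    print(engagement_signal(True, 90, True))   # 1.0: genuine engagement

Even this patch just swaps one fixed interpretation for another; it still can’t hear “I clicked, I even liked it, and I wish I hadn’t.”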

I’m fascinated with learning algorithms (in case anyone reading this hasn’t noticed, I studied computational linguistics, and now work for a company that’s all about data) but if you spend much time at all working with them, you start to see their shortcomings very quickly. Humans are really remarkable creatures. Although we’re predictable in many ways, we also all have unique preferences that someone at Facebook, Netflix, or Twitter didn’t think to code in. Don’t want to see tweets from someone you love who passed away? Oops, someone at Twitter forgot to put that in as a criterion…wait, we don’t even have information on your relationship to this person or whether they’re alive? Oh crap. We forgot we aren’t Facebook. Wait, someone else does want to see that kind of tweet? Wait, what? Make up your minds!

While recommendation and curation systems are pretty darn cool as adjuncts to human judgment, intended to assist us in getting what we want, they’re not replacements for it. The data they collect is always incomplete, their coding of it is always limited, and both are informed by their creators’ biases (does anyone remember Google Buzz? Yeah, those biases). Where these systems go wrong is where they assume they know better than the humans using them. There’s a big difference between adding a little box showing me people I might want to follow and insisting that I dismiss such a box before I can see my stream. There’s an even bigger difference between me getting to decide what’s in my feed and Twitter, Facebook, or Google Plus deciding at least some of it. Twitter has just crossed that line, and accordingly I expect it to become noisier, less useful, and less pleasant to engage with over time, because however well it may think it knows me, its model of my preferences can never fully comprehend my complexity. That’s the uniqueness of dislike.

One thought on “Recommendation engines and the uniqueness of dislike”

Jonathan Ichikawa
August 27, 2014 at 10:51 am

Are you familiar with a web curation program called Zite? I have generally been impressed by its ability to show me things I find interesting. I couldn’t speak to how its algorithm may be similar to or different from the ones you discuss here.
