Twitter and context collapse

Returning to a theme I’ve explored previously, I recently encountered two pieces about Twitter and context:

Justine Sacco is good at her job, and how I came to peace with her
Forced context collapse or the right to hide in plain sight

The two pieces explore different aspects of the theme, but both of them are partially about what I’ve previously called notability (see more thoughts on this in part II). Notability makes the likelihood of context collapse — things you do or say in one social context (where you might have many meaning cues) percolating out to others (where you often don’t) — much higher.

Twitter makes content produced by millions of different people both publicly-available (if not publicly-owned) and accessible (there’s that notion of accessibility again). Reporters then pick and choose from that content to create stories. Sometimes they create scandal sensations, as Biddle did with Sacco. He was able to do that because he didn’t have any context for what she wrote, and without context, it could be read as horrible. Almost all of us, from time to time, say things that can be read this way (as Biddle himself later found out, when he did the same thing). Sometimes we say them in the safety of a context that doesn’t collapse easily.

Sometimes we forget, and we say them in a medium where context collapses are easy. As Tressie’s piece points out, whether journalists have a legal or moral right to take advantage of this (either to do quality reporting, as I’m sure many of them do, or to create scandals, quick-and-easy thinkpieces, or funny articles/listicles à la Buzzfeed) is a somewhat complex question. One of the things that Tressie’s piece seems to be asking, to me, is whether journalists have the moral right to make someone notable, either at all or specifically because of something they did or said on Twitter. Do we have the right to hide in plain sight? We have difficulty having good conversations about this because of the slipperiness of the language around it, an issue I tried to address when writing my posts, and one that Tressie also raises in her tripartite division of the question: legal authority, moral authority, and economic responsibility.

Notability is an interesting part of that area of inquiry, because journalists often make people notable (although of course a lot of the time they merely write about people who are already notable). But in the past, you usually had some idea that you were about to become notable: they wrote a story about you or about an issue you were heavily involved in, interviewed you, or at least asked you to review the piece, all the things journalists usually do when their stories are about, or heavily involve, particular people. Even so, sudden notability in the era of the Internet can have effects people don’t anticipate. But what if you have no idea you’re about to become notable? I wouldn’t be too surprised if my Twitter feed contains things I wouldn’t really want broadcast to the world, in spite of the fact that technically speaking I did broadcast them to the world. The context of people who read my Twitter feed is small (425 accounts right now, according to my widget) and it’s biased toward people I personally know, who therefore have some idea of what I’m like and what kinds of things I’m likely to say and think. People who can guess whether I’m being ironic.

To quote Tressie:

I sign up for Twitter assuming the ability to hide in plain sight when my amplification power is roughly equal to a few million other non-descript [sic] content producers. Media amplification changes that assumption and can do so without my express permission.

When I’m unnotable, my content being both publicly-available and easily accessible doesn’t matter. If I suddenly become notable, it does. If I make myself notable or embark on an activity likely to make me notable, that’s one thing — I have the chance to consider the possibility of context collapse before I experience it. If someone else does it for me, they use their power to strip me of the chance to realize that it might no longer be possible for me to hide in plain sight (a description I like for what it means to be unnotable). And it isn’t only journalists who do this, but other private citizens as well (the Gamergate harassment campaigns being one of the hugely scary recent examples).

What happened here, I think, is that we all (by which I mean anyone who publishes their thoughts on the Internet such that they’re publicly available) became published authors, at the same time as it became far easier to spread published information (and the two changes are obviously closely intertwined). Any published author has always been at risk of this type of stripping of context, since their words can be taken out of their original work, quoted, and spread. When becoming an author was a process, becoming notable was a known possible (and often desired) consequence of it. Now that it’s not much of a process, most of us just aren’t thinking of the possible consequences when we undertake it.

Even more stickily, it’s frequently legal to republish something that has already been published, under the doctrine of fair use, although it depends on exactly what use you’re putting it to. More practically, it’s very difficult to get people to stop doing that once they start, if the content generates a strong social reaction. If someone takes a tweet of mine and publishes it in a related news story, how likely am I to get it taken out? Not freaking very. This story chronicles one photographer’s attempt to get Buzzfeed to compensate him for use of a copyrighted photo. It was a lot of effort, and that’s a case where it’s much clearer that the site needs permission (because it’s a full reproduction of a copyrighted piece of content for commercial gain, and because licensing terms on Flickr are more clearly spelled out than they are for tweets).

We don’t have an existing legal right, that I know of, to hide in plain sight unless we consent to fame. I’m not even sure it’s possible to create one, let alone desirable, because the problem here isn’t really legal, it’s social. But considering the possible consequences, maybe we should at least be talking about it.

OKCupid: as clueless as Facebook, but not as evil.

Much has been made recently of this post on the OKCupid blog, in which OKCupid “confesses” to experimenting on users in order to verify that their algorithm works, in a tone suggesting that this is an obvious thing everyone does, so what of it?

In the process, Rudder (the post’s author) fails to grasp the distinction between what Facebook did that garnered so much opprobrium and what OKCupid did (which I, and I think most people, would join him in considering fairly routine).

What kind of experiment?

OKCupid’s experiment is manifestly related to the purpose of the site from its users’ point of view. They were trying to verify that their algorithm for matching users worked better than a placebo. This is actually both fairly decent experimental design and fairly decent behavior. The match algorithm is the purpose of OKC’s existence as far as its users are concerned — they’re there because the algorithm should be offering them better than random chance of hooking up with someone they’ll actually be compatible with. If it doesn’t work, OKC isn’t doing its job. Ergo, testing the algorithm is important, and beneficial to users. Plus, they tested it against simply telling people that they are a good match, which is fairly perceptive — they rightly deduced that such information would be likely to have a substantial placebo effect, and decided to check whether the algorithm could do better than the mere suggestion of compatibility.

(The actual outcome of this experiment sort of surprised me — they’re not as much better than placebo as I expected. Humans are easy to influence.)
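To make the shape of that comparison concrete, here is a minimal sketch in Python of how a placebo-controlled match test like the one described above might be summarized. This is not OKCupid’s actual code, data, or metric; the group labels, the “conversation” outcome, and all the numbers are hypothetical, just to show a placebo group and a genuine-match group being compared on the same yardstick.

```python
# A minimal, hypothetical sketch of a placebo-controlled match comparison.
# Nothing here is OKCupid's real code, data, or metric.

from dataclasses import dataclass


@dataclass
class Group:
    label: str           # how the pairs were treated (displayed vs. actual match)
    conversations: int   # pairs whose first message turned into a back-and-forth
    pairs: int           # total pairs in the group

    @property
    def rate(self) -> float:
        """Fraction of pairs that ended up in a conversation."""
        return self.conversations / self.pairs


# Placebo: told they're a great match even though the algorithm scored them low.
placebo = Group("displayed 90%, actual ~30%", conversations=170, pairs=1000)
# Genuine: told they're a great match, and the algorithm agrees.
genuine = Group("displayed 90%, actual ~90%", conversations=200, pairs=1000)

lift = (genuine.rate - placebo.rate) / placebo.rate
print(f"placebo: {placebo.rate:.1%} of pairs converse ({placebo.label})")
print(f"genuine: {genuine.rate:.1%} of pairs converse ({genuine.label})")
print(f"relative lift of the real algorithm over placebo: {lift:.0%}")
```

The interesting number is the lift: if the genuine-match group only modestly outperforms the placebo group, the placebo effect is doing a lot of the work, which is roughly the surprise noted above.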

Facebook’s experiment that got them in trouble wasn’t clearly related to the purpose of the site. You can make some arguments that it’s indirectly related, but doing an experiment (a badly designed one at that) to determine whether emotional contagion is a thing does not clearly relate to the stated purpose of Facebook. It’s not clear that Facebook has a single purpose, but let’s take “connecting with people we care about” as a vague one for its users. If Facebook wants to change the proportions of things in my News Feed to see if I spend more time on the site or share more things or comment more (I’m sure it does do all of those things), that would be kind of like what OKCupid did. Instead they deliberately changed the proportions of things in the News Feed with the goal of finding out whether it made people feel or behave more negatively. That’s not beneficial to anyone, really. It’s just experimenting for experiment’s sake, and even if they hadn’t published it in a journal I’d think it was an asshole move, as well as being bad experimental design (sentiment analysis of short texts is known to be unreliable). But it wouldn’t have been scientifically unethical.

Experiment vs Science

“Experiment” is so often used in a scientific context that I think it’s easy to forget that we all do experiments all the time — we take actions and we have hypotheses about the outcome and we compare what the outcome was with what we expected it to be. (I do it for a living, for goodness’ sake — what is troubleshooting but a set of experiments designed, ideally, to eventually fix a problem?) But doing an experiment and then trying to make it part of the body of scientific knowledge frequently requires all kinds of additional hoops to jump through — proper experimental design, valid statistical analysis, and, importantly, informed consent if you’re going to do it on human subjects.

When I originally posted about this (ironically, on Facebook itself), informed consent was the issue I focused on, and it’s clear that Christian Rudder isn’t the only one who doesn’t understand it. There’s a good analysis of the issue at ScienceBasedMedicine.org which clearly discusses what informed consent is (and why Facebook’s TOS doesn’t meet it) as well as the limits on the requirement for informed consent. It’s really quite a limited requirement; although it’s a research best practice, it’s only required of people who work at or collaborate with universities, use federal research money, or publish in certain journals. So you can even contribute to scientific knowledge without doing it, as long as your collaborators, funders, and publishers don’t mind.

Facebook and the journal that published their research did not follow this guideline even though it’s required by the journal’s policy and their collaborators’ institutional policy. What they did is therefore unethical, as well as an asshole move. As I put it in my original post:

In an attenuated sense, informed consent is an extra bar you have to clear to be considered to have done real science that you can publish in a reputable journal — it’s a kind of trade deal…if you don’t collaborate with universities or use federal funding, you don’t have to clear the bar, and can still publish if the journal doesn’t require you to meet those standards either, but at that point you lose a lot of the brand recognition you get from publishing with academics in a well-known journal.

The history of informed consent is too long to recap here (I recommend The Immortal Life of Henrietta Lacks, if you’re in the market for a book about it), but it’s a very important safeguard: it keeps researchers from harming subjects without the subjects’ knowledge, or from extracting benefits that go only to the researchers and not to the subjects. The purpose it serves is to make the body of scientific knowledge and the practice of science something that people can trust, particularly in the area of medical research, but also in the area of social science research. It also keeps people from being harmed or from failing to benefit when they haven’t OK’d it (e.g. from being given a placebo while being told there is a 100% chance they are getting real medicine), which I hope we all agree is a good thing.

Facebook wanted to get all the benefits of science without any of the drawbacks; that’s what made scientists (or at least people trained in that mode) so specifically pissed off about what they did. OKCupid didn’t do that — they didn’t even publish their research until they felt like making a point with it. And I hear Rudder’s writing a book, so he doesn’t need to worry about peer review and federal funding. (Unless the book gets bad reviews, in which case he might wish he had gotten some peer review first.)

Unfortunately, his cluelessness about these two important distinctions tells me that only circumstance and luck keep them from being equally awful. Maybe we do need to have a bigger conversation about whether social experiments on unwitting site users are ever okay, if only to improve people’s understanding of the issues involved.

Privacy, etc. II

I got some offline feedback on my last entry, which led me to rethink a few things. Here are some of the new thoughts:

Anonymity. The way I defined this previously was “being out in public without being notable”. This isn’t a very good definition, because, as Gavin pointed out, anonymity actually has a more technical definition that’s important to preserve, namely: being in public without being known. So works of art can be anonymous, in that they are well-known but no one knows who made them (they are completely unsigned). Or a person can be anonymous by being in disguise or otherwise completely unrecognized. Or information can be made anonymous, “unconnected to an identity”, by purging it of identifying information, like aggregated web search data unconnected to IP address or other similar identifiers.

Gavin suggested that the concept I defined previously could be described as being “unnotable” or “unnoticed”. Perhaps a better word is needed, but having both concepts is certainly more useful.

Another concept that I didn’t define explicitly, but left under the umbrella of privacy, is pseudonymity. This is a very important concept in modern web communications, since so much information these days is attached to usernames. When is a pseudonym truly unconnected to a person’s “real identity”? This can be a challenge to determine: a lot of pseudonymous information is poorly protected because of subtle identifiers in the information itself, or because of interconnections between pseudonymous information and information filed under a “real name”; it can also become an issue when a pseudonym or username is reused across multiple sites, services, or types of works. It’s often easier to find a person’s data on the web once you know one of their common usernames than it is when you only know their name. Usernames, by their nature as keys to a specific record, are closer to unique than names are.

I also am not that fond of my definition of notability. It doesn’t seem to me to require numbers, but only a certain level of significant interest. However, that’s pretty hard to describe and define.

Finally, Dave wrote me an extensive discussion of yet another concept relating to accessibility: risk.
Risk is what you have when information is accessible to some people but not others: there’s a risk that the safeguards preventing it from being accessible to everyone will fail (through loss or deliberate breakage), as well as a risk of a legal decision that the safeguards must be removed (search warrants, subpoenas).

Dave sums his discussion up thus: “Heightened accessibility, even if it is well-understood under normal conditions, still creates the prospect of lowered privacy.”

This is, I think, one of the big deals about accessibility that makes people pitch a fit about sudden increases in it.

Privacy, Accessibility, and Notability

As a result of some long-ago and more recent conversations with smart friends of mine, I came up with some interesting thoughts about privacy.

I don’t fully understand the legal umbrella of privacy, but it seems to me that there are a few distinct concepts that it would be useful to introduce into quasi-legal/common-sense discussions of privacy, and potentially to the legal arena too, in the long run.

First, a brief rundown of the concepts, before we get into their interactions and complications.

Privacy. Things that are private are things that you do on private property not visible from a public space, or in public spaces where you have “a reasonable expectation of privacy”, and that you don’t speak or publish about in publicly-accessible forums — or, if you do, those forums are specifically unconnected to your “real identity”. Also, things are private which are defined by law to be private, but that’s less important here than the nontechnical definition.

Accessibility (or Ease of Access). Things that are accessible are things that are easy for the average person or user to find. This is not a great term because “accessible” also has a technical binary definition related to privacy: if information is not at all accessible, it is private. But bear with me for a while, and suggest a better word if you have one.

Notability. Things that are notable are things that a substantial percentage of people (in the whole population or some subgroup) are interested in knowing about.

Anonymity. Being out in public without being notable.

The complexities of online “privacy” often come up when something besides privacy is involved, namely accessibility or notability. In my old journal, I wrote an entry about Google Street View (and Facebook News Feed, to some extent) in which I used the terms “theoretical privacy” and “actual privacy” rather than the word “accessibility”, although I did notice, on re-reading the comments, that I started to talk about information being “(easily) accessible”.

GSV and FNF are iconic examples of things that “raised privacy concerns” without actually doing anything to change whether information was private or not. All the information on GSV and FNF was always available (to anyone who set foot in a place, in the case of GSV, and to anyone who previously had access to the info, in the case of FNF). What they did do was make it incredibly easy to find things out that previously had required a lot of effort to find out: what a place looks like at ground level, and what your friends are doing on Facebook. So the information became accessible (in the sense defined above) where before it had been inaccessible.

Notability is implicated in most problems where accessibility becomes an issue. If information is not notable (no one is really interested in knowing it), it doesn’t matter if it is easily accessible or not: no one cares, either way. Dave sent me a link today (which spawned this whole thought process on my part) about a guy whose information suddenly became notable. The guy didn’t mind, but it gave him pause for thought, as I’m sure it would most of us.

In the FNF and GSV cases, nothing became differently notable, just differently (more easily) accessible. This is closer to a form of privacy loss, because it makes anything notable easier to find and, once found, much easier to access. BoingBoing readers had many things to say about it, some of them wondering if we need new laws, or a new area of law, to deal with accessibility of information, since it isn’t covered by traditional privacy law.

Personal conduct in public, combined with YouTube and other video-upload services, illustrates a different set of circumstances. Most of us who live in largish urban areas are, most of the time we’re in public, anonymous: out in public without anyone particularly caring who we are. We feel restricted in our activities by our visibility, but we don’t need to worry very much about anyone caring what we’re up to, even if we’re eating cookies when we’re supposed to be on a diet, or smoking when we said we quit. The situation isn’t the same in smaller communities, of course. In small communities, it’s hard to be out in public without being known.

Even in larger communities, recording and uploading a person’s behavior to a video site like YouTube makes it more accessible, but doesn’t necessarily make it more notable (consider all the incredibly boring YouTube videos that no one watches). Likewise, a person’s behavior becoming an object of attention/controversy would make it more notable but not more accessible: you’d still have to actually find the person to see what they were doing. When you get the simultaneous combination of accessibility and notability, you get something like the recent BART shooting video + controversy or the Caltrain cyclist arrest. But another worrying situation is when something goes up earlier, and then later becomes notable (like the guy’s photos as linked above, or like Facebook photos of undergrads drinking which get them in trouble).

How do we live our lives in a world that is increasingly a participatory panopticon? How do we act in public? What do we publicize and what do we keep private when things could become far more accessible or notable in the future than we ever imagined?

Would you rather stupid or arbitrary?

Some time back, I wrote about the TSA’s policies on knitting needles. Not surprisingly, it isn’t just the TSA that seems to have trouble defining what the issue with knitting needles is, or why it’s an issue at all.

On my way back from London yesterday, the guy at the Continental counter — not an airport screener — asked me if I had anything in my carryon which could be used as a weapon. I thought about it and said no, with the possible exception of knitting needles, but that the ones I was carrying were bamboo, dull-tipped, and had made it through US security on the way over (all true).

He said that nevertheless I should check them because they aren’t permitted. What really got to me about this is that he said that the airline permits them (also obviously true since I was previously allowed on board with them and they weren’t at any point interrogating me or any old ladies about the contents of our bags) but that security doesn’t, and the reason that security doesn’t is that they are trying to follow what the Americans tell them to do.

The first part of what he said turns out to be true, though I had no way of verifying it at the time except by either leaving the line and walking over to ask security or completing check-in and trying my luck. The Gatwick airport website specifically lists knitting needles of all kinds as not permitted in “hand luggage” (the British term for carryon luggage). But the second part is clearly untrue, and I really wish that people would not give bogus excuses like that for their stupid policies. I said rather crossly, but still politely, to him that this obviously had nothing to do with US airport security policy, since the US has no such policy, and moved the knitting bag into my checked suitcase.

In Newark I moved it back to my carryon before customs and got absolutely no comment when I went through security again. Whatever excuse Gatwick airport has for forbidding my knitting needles (and it is just Gatwick and a few other airports — neither the government nor BAA, which runs many British airports, forbids knitting needles!), it isn’t US security. But I must say, they don’t have an arbitrary policy — just a stupid one.

Finally, some decent news on wiretapping

House Democrats have some vertebrae left in their spine, at least, having refused to rubber-stamp telecom immunity. Good for them, even if the whole issue of warrantless wiretapping in general is still a mess. If you’re less lazy than me, you could write to your House rep congratulating or reproving them, but I’m going to go rest now.

I’m still coughing over here, but less so, and whatever nasty little bug is causing my illness has now diversified into sinus crud, but I felt like cooking today (VeganYumYum‘s Hot and Sour Cabbage Soup — tasty, easy, and sinus-clearing!) and should be back at work tomorrow, so life is returning to normal.

Evocation of a falling empire

Greenwald writes:

That has become Congress’ only role, its only power: to endorse what the President decrees. Like the sad, impotent Roman Senate which existed only to lend its imprimatur to the Emperor’s conduct, the Congress’ only choices are — as it did yesterday — to plead for “re-consideration,” and then, when it’s not forthcoming, either do nothing or endorse the President’s behavior.

Not only is this a highly evocative description, and highly relevant to the latest nonsense about the FISA bill, but it is the description of a falling empire — which is what our country is, in case you haven’t noticed lately. It’s extremely depressing, because instead of just stopping the empire-like behavior, we’re doing crazier and crazier things and managing them worse and worse.

I’m especially discouraged to learn that the other side is smearing the EFF, of all things, lumping them in with rich trial lawyers. Because a non-profit law firm protecting people’s rights is oh-so-lucrative…are they crazy? The EFF is a wonderful organization and personally and civically very dear to me. Which reminds me, where is my checkbook? Time to send them some money.

Dodd against immunity

The man himself speaks.

Keep emailing or calling your Senators.

From Glenn Greenwald, after the Judiciary committee version (no immunity) was tabled:
The pro-immunity, pro-warrantless eavesdropping Democrats: Rockefeller, Pryor, Inouye, McCaskill, Landrieu, Salazar, Nelson (FL), Nelson (NE), Mikulski, Carper, Bayh, and Johnson. Neither Clinton nor Obama bothered to show up for any of this.

And they’re going to provide leadership to us in the next four years? Really?

If any of your Senators are the people listed above (or you once lived in that state, or you’re just looking for more people to email) then try these nitwit non-Democrats.

Greenwald also sums up why I’ve just completely lost my patience with these ‘Democrats’ in Washington.

“Democrats find themselves in the same corner they were in last summer: on the one hand their base demands they block expanded domestic spying powers for the Bush Administration; on the other, they can’t risk looking soft on terrorism, especially nine months before national elections. Senate majority leader Harry Reid is angling for another month’s extension of the PAA, but that would only give the Republicans a third bite at the apple in late February….”

Here we have a perfect expression of the most self-destructive Democratic disease which they seem unable to cure. More than anything, they fear looking “weak.” To avoid this, they “cave” and surrender and capitulate and stand for nothing. As a result, they are, as here, endlessly described in the media as “caving” and surrendering. As a result, they look (and are) weak. It’s a self-destructive cycle that has no end.

Until we elect some Democrats who don’t do this, I suppose. I swear, we need a litmus test of our own, only it’ll be based on the Constitution. “Sorry, Mr. Rockefeller, Mrs. Feinstein — you can’t run for office as a Democrat if you don’t believe in the Fourth Amendment.”

I can get quite eloquent (and not a little rude) when angry

Guys, it’s time for action on FISA again. NOW. TODAY. I was on blog silence last time this came up, but it’s time again. Start with Glenn Greenwald for a decent overview of the politics. There should be a link in there somewhere explaining the principles too, if you’re not already familiar with the fact that the NSA spied on Americans without warrants, and the telecoms companies (mostly) spinelessly caved and cooperated, despite the fact that it’s completely illegal. Now the telecoms want to be saved from their own idiocy and the NSA wants to keep wiretapping us without warrants. I say no way. Write to your Senators, and Senators Reid (pro-spying) and Dodd (willing to risk a ton of political capital and filibuster any bill with immunity provisions), as well as Sens. Obama and Clinton, who are doing fuck all to help Dodd here because they are more interested in holding office than in doing something with it.

Here’s my letter to Reid. I am more than a little pissed off at him. I resisted using the word “toady” in the letter, but only just.

Senator Reid,

I am disgusted to hear that you are still serving the Bush administration’s agenda on the matter of telecoms immunity and NSA wiretapping as addressed in S.2248, the ‘Protect America’ Act. You reject taking time on this important issue because Senators ‘have places to go.’ In fact, the most important place a Senator can be is on the floor of the Senate, debating matters that are essential for national security. These issues should not be rushed through. You are using this only as an excuse to hurry the bill through, forcing your own party to loudly filibuster this bill, where you have declined to force the opposing party to do the same to similar bills that they don’t want to see pass.

This act will not protect America from anything. It is contrary to the deepest American values to allow spying on Americans without cause, and exempt the perpetrators from responsibility. You are failing in your duty as a leader of your country, you are failing in your duty as a leader of your party, and you are failing in your duty as a citizen and government official to uphold the Constitution of the United States.

Reverse your position on this matter. Allow the bill time to be considered. Listen to the many Americans who do not believe that secret spying will make us safer, who want the perpetrators brought through the justice system so that their actions can be fairly and objectively assessed for legality. Listen to the Senators who say that this bill can be passed without these repugnant provisions.

Be a Democrat and a patriot, Mr. Reid. Just this once.

[signed]