Ethics smethics

Facebook and OkCupid both got some bad press a few months back for running experiments on their users. Members of the general public and social scientists alike were incensed.

To the public, experimenting on emotions conjures images of men in white lab coats holding people’s eyelids open with forceps, and to social scientists this sounded cavalier in the extreme compared to the carefully controlled and regulated process that we go through when we do research with the public. But for all that, what they did really wasn’t bad. It was probably even a good thing.

What Facebook did was this: you don’t see every status update that your friends post, because for most people there would be too many. Facebook has a mysterious algorithm that picks out a set for you to see. For a whole chunk of users, they tweaked that algorithm so that it very slightly adjusted the chance of an update being shown, depending on whether it contained words from a pre-selected list of positive and negative emotional terms. See here for a legalistic take on whether this complied with research rules (it probably did), and here for a more philosophical take.
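To make the mechanics concrete, here is a toy sketch of the kind of tweak being described. This is emphatically not Facebook’s actual code: the word lists, probabilities, and function names are all invented for illustration.

```python
import random

# Invented word lists; the real experiment used a pre-selected emotional lexicon.
POSITIVE_WORDS = {"love", "great", "happy", "awesome"}
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible"}

def show_probability(post_text, base_prob=0.5, nudge=0.05, condition="fewer_negative"):
    """Return a slightly adjusted chance of a post appearing in someone's feed."""
    words = set(post_text.lower().split())
    prob = base_prob
    if condition == "fewer_negative" and words & NEGATIVE_WORDS:
        prob -= nudge  # posts with negative words are shown a touch less often
    elif condition == "fewer_positive" and words & POSITIVE_WORDS:
        prob -= nudge  # posts with positive words are shown a touch less often
    return max(0.0, min(1.0, prob))

def include_in_feed(post_text, **kwargs):
    """Randomly decide whether to surface the post, using the nudged probability."""
    return random.random() < show_probability(post_text, **kwargs)
```

The point of the sketch is only that the intervention is a small nudge to an already-existing selection step, not a wholesale rewrite of what people see.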

OkCupid, for their part, did all kinds of things, including hiding pictures for a while and very occasionally fibbing to people about which other users their data suggested they would want to date. See here for their rather entertaining blog post that describes a lot of these in detail.


Experimenting with our emotions and lying to us, it sounds terrible, right?

[Image: A Clockwork Orange]

But as sinister as it sounds to say “conducting experiments on our emotions”, you could relabel the exact same activity as “product testing”, and it would have drawn shrugs from people rather than anger. And in a very real sense, that is very much what they were doing.

They weren’t dragging random people off the street and forcing them to experience strange and disturbing stimuli, they weren’t electrocuting us, or embedding subliminal images in our cat macros. They are both online platforms that people use, in long and drawn-out ways, precisely because of the emotional effects their products have on us. We use them to reminisce with old friends, joke with current ones, and share political rants with people who are maybe starting to reconsider being our friends. We go on them to find romance and love, and… let’s just say, other associated activities.

Facebook and OkCupid change the emotions of their users, and that is why we use them. If they didn’t, we would stop. They are in the business of delivering streams of stimuli to us that have an impact on our emotional state. So do films, TV shows, recipe books, and airplane flights.

All of those companies constantly look at the way they deliver their services to try to make them better – well, the good ones do. And “better” often means changing them in ways that affect our emotions. The only difference between an “experiment” and “mucking about with it” here is that experiments do it systematically instead of haphazardly. That’s generally a good thing, because it makes them more effective at it.

But was what these two companies did really just product testing, and did they have a duty to warn people before they participated?

In OkCupid’s case it clearly was. Their purpose in fibbing to people about how good a match their algorithm said they had with a partner was to see whether the algorithm worked. If people were just as happy dating those who were supposed to be terrible matches as they were dating those who were supposed to be good matches, then that would suggest the algorithm was worthless. They were clearly making sure their product worked.
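The shape of that check is simple enough to sketch. The function name and the numbers below are entirely made up, just to show the comparison being described, not OkCupid’s actual analysis.

```python
from statistics import mean

def match_score_gap(true_good_matches, fibbed_good_matches):
    """Compare some satisfaction measure (say, the rate of extended conversations)
    between pairs the algorithm really rated highly and pairs who were merely told
    they were rated highly. A gap near zero would suggest the match score predicts
    nothing; a clearly positive gap would suggest it adds something."""
    return mean(true_good_matches) - mean(fibbed_good_matches)

# Hypothetical numbers, purely to illustrate the comparison:
print(match_score_gap([0.46, 0.52, 0.49], [0.28, 0.31, 0.25]))
```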

The Facebook case is less clear. Did it help them bring a better service to their customers to find out that there was a contagion effect of negative emotions? Perhaps a little, although one might argue that they were leveraging the kinds of changes they already make to their algorithm anyway in order to find out more about topics of only tangential interest. If people consume Facebook in order to feel certain emotions, then it would be of commercial interest for Facebook to find out what produces those emotions in people, just as it would be of interest to the makers of a horror movie to know whether a planned film would create fear in people.

If it were just for scientific interest, though, or even just the idle private curiosity of the owner, then that is more of a grey area. We would be upset if we thought a business were using its position of trust to find out uncomfortable personal information about us (e.g., if Google were reading the mail I send through it). On the other hand, it seems less objectionable for them to draw general lessons about humanity from my interactions with them. For instance, nobody got that upset when a porn website released reports showing that usage in cities changes depending on whether the local sports team has just won or lost. The research Facebook reported may or may not have helped improve their product, but it was fairly clearly non-personal, and it was at least trying to learn something for the general benefit of humanity (whether or not it succeeded).


The remaining question, though, is whether you have to tell people first, before running these sorts of tests. Generally it’s better if you do: as a principle, you want to be as transparent as possible about what you are doing. Of course, with tests of this sort you can’t tell people exactly what you are looking for right before you look, because that, of itself, would influence people in ways that mess up what you are trying to measure. People would be looking at their feeds thinking “Why are they showing me this? How do they think that will make me feel? Am I supposed to be happy about that? I suppose I could be happy about it”, instead of reacting naturally. Telling people in much vaguer terms that experiments might happen would be generally good, though also generally less transparent, and so less useful to individual users.

Perhaps the best way to handle it, though, might be to provide people with a generalized opt-in, of the kind we are already familiar with in software that we run on our own devices. We are used to being asked to tick a box saying something like “let (whoever) upload anonymous usage statistics to our server, in order to improve our service”. Why not add another which says something like “We are constantly making small changes to our service in order to try to get it right. It is ok to share anonymous response patterns to these with scientists, to help them learn about human behaviour.”
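As a very rough sketch of what that could look like under the hood (the field names and wording here are invented, not any real platform’s settings):

```python
from dataclasses import dataclass

@dataclass
class ConsentSettings:
    # "Let us upload anonymous usage statistics to improve our service."
    share_anonymous_usage_stats: bool = False
    # "It is ok to share anonymous response patterns with scientists."
    share_anonymous_research_patterns: bool = False

def eligible_for_study(settings: ConsentSettings) -> bool:
    """Only users who ticked the research box get included in these tests."""
    return settings.share_anonymous_research_patterns

# Example: someone who allows usage stats but hasn't opted in to research.
user = ConsentSettings(share_anonymous_usage_stats=True)
assert not eligible_for_study(user)
```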

Something like that might strike a balance between letting people opt in, and not bombarding them with dozens of pages of informed consent legaleze at any kind of a regular interval. The goal should be transparency without overload.