Where is the line between industry user research and academic human subjects research? And what rights do—or should—users have over how their (our) data is used? As user research becomes an established part of technology design, questions of research ethics grow even more pressing. These issues came to the fore in the wake of Facebook’s recent controversy over a study of “emotional contagion” (Kramer et al. 2014), conducted by in-house researcher Adam Kramer (no relation), with input from scholars at Cornell and UCSF, to test whether users’ moods can spread through what they see on their News Feeds.
The study has generated vociferous debate among user researchers, academics, and designers (for a good overview, start with The Atlantic’s coverage) over whether the study was ethical, with many, such as this article at The Guardian, expressing serious misgivings about its potential harm. The British Psychological Society (BPS) officially labeled the study “socially irresponsible,” and even the scholarly journal in which it was published, PNAS, has issued an (admittedly murky) “statement of concern.” Still others point out that the methodology, which determined mood from snippets of text, was deeply flawed. These critiques have in turn sparked a wave of pro-user-research apologists, who claim that, on the contrary, suppressing such research would be unethical; that the study could plausibly have passed the more stringent IRB regulations; and that those regulations already make it too difficult for academics to conduct the kind of research undertaken in corporate settings.
But much of this debate sidesteps a key issue that social scientists have been contending with since at least Stanley Milgram’s studies of how far test subjects would go in delivering what they believed were painful shocks to actors if an authority figure told them to: how to conduct research ethically. (read more...)