Where is the line between industry user research and academic human subjects research? And what rights do—or should—users have over how their (our) data is used? As user research becomes an established part of technology design, questions of research ethics become even more pressing. These issues came to the fore in the wake of Facebook’s recent controversy over a study of “emotional contagion” (Kramer et al. 2014), conducted by in-house researcher Adam Kramer (no relation), with input from scholars at Cornell and UCSF, to test whether users’ moods can spread through what they see on their News Feeds.
The study has generated vociferous debate among user researchers, academics, and designers (for a good overview, start with The Atlantic’s coverage) over whether it was ethical, with critics (such as this article at The Guardian) expressing serious misgivings about its potential harm. The British Psychological Society (BPS) officially labeled the study “socially irresponsible,” and even the scholarly journal in which it was published, PNAS, has issued an (admittedly murky) “statement of concern.” Still others point out that the methodology, which determined mood from snippets of text, was deeply flawed. These critiques have in turn sparked a wave of pro-user-research apologists, who claim that, on the contrary, suppressing such research would be unethical; that the study could plausibly have passed even more stringent IRB review; and that existing regulations already make it too difficult for academics to conduct the kind of research undertaken in corporate settings.
But much of this debate sidesteps a key issue social scientists have been contending with since at least Stanley Milgram’s studies of how far test subjects would go in delivering painful shocks to actors if an authority figure told them to—and that is, how to conduct research ethically. This includes determining what constitutes informed consent and taking into account the needs and interests of research subjects. The obligation to conduct research ethically falls on us as researchers, above and beyond regulations and guidelines, and in communication with people we study.
ethnographic ethics beyond standard regulations
I encountered numerous ethical issues in my own ethnographic research on Facebook, studying social media among young people in Berlin from 2009 to 2010. I lived in close proximity to people whose practices I was studying and who had agreed to be interviewed and observed. As many ethnographers have found, people quickly forget about the anthropologist in the room, even when they have consented to participate. On one occasion, for example, a close friend was complaining about an acquaintance who was messaging him repeatedly on Facebook about something he did not want to discuss with her. I’m not going to go into further details here, for the same reasons I haven’t described the incident in my published work—because I was, and am, concerned that I have access to the intimate personal lives of my subjects, who were also my friends. This particular instance did not contribute meaningfully to questions I was pursuing about the changing relationship between digital media and experiences of place, so I did not analyze it further. But thorough, critically engaged research must grapple with thorny ethical dilemmas, such as when to take note of illicit activities and whether to record information that could cause harm if shared or compromised—from minor embarrassment to more serious bodily or economic injury. I do sometimes share stories that may be awkward or embarrassing, but when I do, I take extra steps to conceal my participants’ identities. Many even consented to the publication of their real names, yet I always use pseudonyms and on occasion create composite characters to illustrate particular stories and findings without exposing people unnecessarily.
Some of these considerations are specific to ethnographic research and may not apply to large-scale quantitative data collection and analysis—and, in fact, may not be covered by institutional review at all (which is federally mandated but implemented by individual university review boards). I was not even required by my IRB protocol (that is, the description of my research activities and consent procedures approved by UC Irvine’s review board) to conceal participants’ names if they consented to their use. In the example above, I was well within my IRB parameters to write about fraught interpersonal encounters that took place on social media, as long as I took appropriate steps to mask individual identities. This approach often makes sense for quantitative research, where one can strip away identifying details and still describe larger trends in a body of data. But in granular ethnographic description, it becomes difficult to tell a layered, contextualized story without maintaining specifics that allow subjects to recognize themselves—and their friends, and their actions, potentially with negative consequences. What to do?
In my case, I make judgment calls about which stories are necessary for making key analytical points, and which are not worth the risk, above and beyond the requirements of institutional review. I must also assess how my research participants can evaluate the work I’m doing and what consent entails—although it was clear I was studying social media activities in Berlin, I could not assume everyone was familiar with anthropological research methods, which seek to situate activities in broader social and cultural contexts. Such contexts can include encounters and practices in people’s homes and other spaces deemed private. The American Anthropological Association’s statement on ethical guidelines, “Principles of Professional Responsibility,” concerns itself primarily with ethical, rather than moral or legal, considerations, on the premise that ethics must be considered separately from other domains: “while moral, political, legal and regulatory issues are often important to anthropological practice and the discipline, they are not specifically considered here. These principles address ethical concerns.” The first ethical obligation for anthropologists in this document is “to do no harm,” and to conduct advocacy or critique in dialogue with research participants, rather than determining their interests for them:
As with all anthropological work, determinations regarding what is in the best interests of others or what kinds of efforts are appropriate to increase well-being are value-laden and should reflect sustained discussion with others concerned.
On the subject of informed consent and the importance of transparency, AAA’s statement says:
Researchers who mislead participants about the nature of the research and/or its sponsors; who omit significant information that might bear on a participant’s decision to engage in the research; or who otherwise engage in clandestine or secretive research that manipulates or deceives research participants about the sponsorship, purpose, goals or implications of the research, do not satisfy ethical requirements for openness, honesty, transparency and fully informed consent. (emphasis added)
taking research ethics seriously
Facebook’s researchers did not inform users that they were conducting a psychological experiment or that the emotional tone of users’ News Feeds was being manipulated, nor did they tell users after the fact that such a study had been conducted, depriving them of any opportunity to ask questions or seek support if they experienced unintended harm. Whether or not this particular research protocol differs meaningfully from the ways social media companies manipulate user experience all the time, it clearly constitutes “research” in the federally defined sense (“a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge”), with human subjects (“a living individual about whom an investigator [whether professional or student] conducting research obtains (1) Data through intervention or interaction with the individual, or (2) Identifiable private information”). Judging from the response the study has garnered, I think most Facebook users accept that companies collect anonymous data from them to “improve” their experience (that is, make it more profitable), but they do not implicitly consent to being treated as experimental subjects. Facebook’s researchers were not simply collecting user data to increase “engagement” (that is, time spent on the site, clicking through links, and so forth), but were seeking to contribute to generalizable knowledge about how emotional states transfer unconsciously and “outside in-person interaction” (Kramer et al. 2014). Testing this hypothesis required intervening in individual users’ Facebook experiences.
Still, some argue that Facebook (and other tech companies) should not be castigated for conducting such experiments on users, lest they continue doing so but keep the results to themselves. Even AAA’s ethical guidelines indicate that disseminating findings is a key element of research ethics. But the question is not whether Facebook should or should not study user behavior—or even whether it should conduct deceptive psychological experiments. The question remains how to conduct user research ethically as new kinds of data become available, often at massive scale. When technology companies create products that more and more of us use daily as part of our routine social and professional lives, we—as users and researchers—can and should demand that new forms of research take these ethical obligations seriously.