I call myself a scholar of information, communication, and technology with a view toward influencing law and policy. To that end, my motto over the last few years has been “Social Science matters!” By that I mean that qualitative research, research aimed at understanding how people and organizations actually use technology, is important for creating good law. To this end, ethnographic study, the kind that produces thick descriptions of people and culture, should be the modus operandi of any body tasked with writing regulations.
Recently I was asked to participate in training a group of telecommunications regulators who want to conduct a regulatory impact assessment (RIA). An RIA is a thorough investigation of the possible impacts of a proposed or revised regulation. In the most basic sense, the investigation is used to forecast whether the new rule will achieve what it’s supposed to, and what else could happen. Countries around the world use RIAs to evaluate regulatory needs and possible interventions. US federal agencies have been required to conduct and submit RIAs since the early 1980s, and President Bill Clinton codified this requirement in 1993 with Executive Order 12866. A second executive order, 13563, requires that agencies use “the best available techniques to quantify anticipated present and future benefits and costs as accurately as possible.”
A cost-benefit analysis is, perhaps, the key section of the report. Cost-benefit analyses quantify and monetize the possible and probable outcomes and impacts of an intervention, and they also examine the alternatives. If quantitative data is not available for assessing costs and benefits, US policy suggests that they be described and evaluated qualitatively. Qualitative data can help bulk up an agency’s assessment of an issue. In testimony before the Senate Committee on Commerce, Science, and Transportation in 2012, then Federal Trade Commission Chair Jon Leibowitz had to answer questions about the “thinness” of the cost-benefit analysis section of the FTC’s online privacy regulation RIA. Although he acknowledged the lack of detailed cost-benefit analysis in the Commission’s report, Leibowitz asserted that with online privacy, the costs and benefits were difficult to quantify. This reliance on quantification by regulators is at the heart of my trouble with traditional RIAs, and it points to a larger issue with assessing law, particularly when the law has to do with technology.
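To make the quantification step concrete, here is a minimal sketch of the arithmetic behind a discounted cost-benefit comparison, the kind of calculation that sits at the core of most RIAs. All of the figures are hypothetical illustrations I made up for this post, not numbers from any actual assessment.

```python
# A minimal sketch of a discounted cost-benefit comparison.
# All figures are hypothetical illustrations, not data from any actual RIA.

def present_value(flows, rate):
    """Discount a stream of annual amounts (year 0 first) to present value."""
    return sum(amount / (1 + rate) ** year for year, amount in enumerate(flows))

# Hypothetical rule: a one-time compliance cost now, diffuse benefits later
# (figures in millions of dollars).
costs = [10.0, 0.0, 0.0, 0.0, 0.0]
benefits = [0.0, 2.9, 2.9, 2.9, 2.9]

# OMB's Circular A-4 has directed agencies to report results at standard
# discount rates (historically 3% and 7%).
for rate in (0.03, 0.07):
    net = present_value(benefits, rate) - present_value(costs, rate)
    print(f"At {rate:.0%}: net present benefit = {net:+.2f} million")
```

Even in this toy example, the sign of the bottom line flips with the choice of discount rate alone, which is part of why I am wary of letting the quantitative section carry the whole analysis.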
As a baseline issue, law is normative. It prescribes how people and organizations should act and punishes infractions. Anthropologist Edward Hoebel defined law as having four functions: to identify behavior acceptable to society, to grant authority, to settle disputes, and to redefine relationships.[1] These functions assume, however, that people act rationally and always follow or acknowledge power, social norms, or inherent risk.
Raise your hand if you know that people don’t always act in ways that can be predicted or even imagined.
I think this is particularly so with the use of technology. Too often, creators and regulators have in their minds how people will and should use a technology. The problem is that, based on culture, experience, and individual values, among other things, people may use technology in unpredictable ways. I try to hammer this home to my technology and society students: for each communications technology we study in class, they examine the differences between how its creator intended or expected it to be used and how people actually used it.
Such an approach, in my opinion, is necessary with laws and regulations, particularly those affecting the use and adoption of technology. Although tools like RIAs examine the costs and benefits of an agency action, unless the analysis features thick description, the kind produced by qualitative research, it misses an important examination of the possible effects on people and organizations.
So when I was asked to create portions of the training workshop for the telecommunications regulators, I asserted the need for training in qualitative research methods. To this end, the segments of the workshop that I led focused on the usual suspects: observation, focus groups, and in-depth interviews. Of course, I also covered survey methods. But my training was aimed at generating thick qualitative descriptions of how people use the technology that may be regulated. It is in generating these descriptions that regulators will gain a better understanding of the possible impacts of the laws.
Endnote
[1] Hoebel, E. Adamson. The Law of Primitive Man: A Study in Comparative Legal Dynamics. Harvard University Press, 2009.
2 Comments
Exciting! Thanks for the post, Jasmine. I am curious: how did the regulators respond to this? I’d love to hear your thoughts on that. (Or maybe it could be a follow-up blogpost!)
The reason I ask is that I wonder if some of the same tensions that crop up in the “fieldwork for design” paradigm also crop up here in the “fieldwork for regulation” framework. As Lucy Suchman and many others in the CSCW community have written, there’s always been a tension between how the process of design proceeds (fast, rapid, with quick iterations), and how fieldworkers conceive of ethnography (long, slow, something that usually comes to a slow boil). Which means that designers and fieldworkers have to come up with new processes if they collaborate. Did your regulators, for instance, mention that they had to work within a quick time-frame, and that ethnography might not be practical (hence quantitative risk analysis)?
Another aspect that I’ve heard about in a recent talk was that policy-makers sometimes tell researchers who are working in a more interpretive tradition that their work is “too academic.” (Not that quantitative researchers don’t get the same response. Still.)
Hi Shreeharsh, Thanks for the comment.
The regulators I worked with responded enthusiastically to learning how qualitative methods could assist them. Even assuming that a full ethnographic study would not be practical for them, the methods they learned (observation, in-depth interviews, focus groups) could still prove useful in their decision-making.
And no one complained about qualitative methods being too academic. I think this may be because I tried to use examples and activities that demonstrated the utility of the different methods.