
Trusting Experts: Can we reconcile STS and Social Psychology?

Image: A protest against fracking. The sign reads “Fracking = Death,” with a dead bird and a fracking tower pictured above the words. CC BY-SA 2.0.

Numerous battles are being fought today across America’s political landscape, from global warming to the regulation of new technologies (e.g., GMOs, fracking). Science plays a big role in these debates, and as a result, social psychologists, political scientists, economists, and other social scientists have become interested in why people (or rather, certain people) don’t accept scientific findings. These social scientists have converged on a concept called motivated reasoning: because our reasoning powers are directed towards particular ends, we tend to pick the facts that best fit our needs and motivations. Motivated reasoning, in this explanation, is universal, perhaps a product of evolution; all human beings do it, including experts. It also raises the profoundly disturbing possibility of a scientific end to our Enlightenment hopes that experts—let alone publics—can be rational, that they can neatly separate facts from values and facilitate a harmonious society.

Influential science journalists have now started drawing on those findings. Chris Mooney, who made a name for himself with The Republican War on Science, drew on social psychological and brain imaging research on political bias in a widely cited Mother Jones piece, “The Science of Why We Don’t Believe Science: How our brains fool us on climate, creationism, and the vaccine-autism link.” Political scientists have written about it in high-profile outlets as well, such as Brendan Nyhan for the New York Times, and it has made several appearances on The Monkey Cage, a political science blog that is now part of the Washington Post.

While social psychologists and science journalists are preoccupied with why some people don’t believe science, their answer—motivated reasoning—is notably at odds with research in STS. An STS scholar like Brian Wynne (1996, 2003) would explain the disagreements between experts and laypeople differently. Wynne has argued persuasively that the disconnect between experts and publics is not about public ignorance. Nor, he argues, is it just about particular decisions that need to be made (should we build this nuclear plant? should this drug be approved?) or propositions that need warrant (how harmful is this chemical?). Rather, the publics that seem to be resisting science are resisting “the presumptive imposition” of these piecemeal questions on what, to them, are central questions about meaning and social life. In other words, the publics that resist science are resisting precisely the reduction of science to propositional questions about facts and truth, a reduction that sometimes hides how science, as a cultural form, is being used to guide both “nature and society.”

At the same time, though, motivated reasoning seems to me a fairly benign concept: it suggests that “data” are always interpreted in the light of previously held beliefs—that is, that facts and values are not easily separable in practice. That is by no means a bad place to be.

What the experiments (don’t) show

And yet, there are times when the concept gets applied to some strange ends. In a Washington Post article, for example, Chris Mooney reports on a recent experiment by law professor turned psychologist Dan Kahan to argue that yes, as a rule, you should trust experts more than ordinary people. Kahan and his collaborators asked the subjects in their pool (judges, lawyers, law students, and laypeople, statistically representative of Americans) to decide whether a given law applied to a hypothetical incident. The question was: would they apply the rules of statutory interpretation correctly? First, subjects were informed about a law banning littering in a wildlife preserve. Next, they were told that a group of people had left litter behind, in this case reusable water containers. But there was a catch: some were told that the containers had been left behind by immigration workers helping illegal immigrants cross the US-Mexico border safely, while others were told that the litter belonged to a construction crew building a border fence. All were polled about their political and ideological affiliations.

Predictably, people came down on different sides of the issue depending on their ideological beliefs: Republicans tended to be more forgiving of the construction workers but less so of the immigration workers, whose actions they disagreed with; Democrats displayed the opposite tendency. What was different was that judges and trained lawyers tended, more than laypeople and law students, to avoid this bias. They interpreted the law correctly (the correct answer being that there was no littering, because the water containers were reusable) despite their ideological convictions. So far so good. I interpret the experiment to be saying that lawyers, unlike the rest of us, undergo special institutional training, and that this habitus lets them reach the “correct” result far more frequently. Experts are different from the rest of us, in some way.

But what’s interesting is the conclusion that Mooney (with some warrant from Kahan) draws from this experiment: that while experts are biased, they are less biased than laypeople, and that therefore experts should be trusted more often than not. Well. John Horgan of Scientific American takes the pragmatic view that this, of course, leaves open the question of which experts to trust, especially when they disagree. And besides, to trust experts simply because they are experts seems against the spirit of a democratic society. Within STS, too, while Naomi Oreskes (Oreskes and Conway 2011) and Paul Edwards (2010) have argued that it is time for us to trust climate scientists, they frame their reasons as pragmatic choices, drawing extensively on the history of those scientific communities.

Reconciling STS and Social Psychology

How, then, to reconcile the Kahan study with findings in STS and the anthropology of expertise about the social construction of expert knowledge, without making a blanket case for or against expertise? Here’s one way. Let’s take the topic of Kahan’s study: statutory interpretation. Just three months ago, the US Supreme Court handed down a decision in King v. Burwell, a challenge to President Obama’s health care law (the Affordable Care Act, or ACA) that hinged precisely on that question. The law says that federal subsidies will be available for those who register for health insurance exchanges “established by the State.” It so happens that many states with Republican governments refused to establish these exchanges, so the federal government stepped in to do it. The Internal Revenue Service (IRS) ruled that the law made subsidies available for all exchanges, irrespective of who set them up. But opponents of the law disagreed, arguing that subsidies were available only if a state, not the federal government, had set up the exchange. And while it seemed very clear in the long battle over the ACA that the subsidies were meant for everyone, the legislation was controversial enough (journalist David Leonhardt has called it the “federal government’s biggest attack on inequality”) that lower courts disagreed with each other and the Supreme Court took the case up. It ruled 6-3, with some stinging asides from Justice Scalia deriding the majority’s reasoning, that the IRS’s interpretation of the ACA was correct. But along the way, it left many people worried, especially after oral arguments, that the ACA’s end was near.

Now, Supreme Court justices are certainly experts: elite, well trained, and at the top of their games (perhaps even more so than the judges in the Kahan study). Yet here they were, right at the center of a storm, disagreeing strenuously about how the law was to be interpreted. You might think their expert training would allow them to come to a unanimous conclusion. So how to reconcile their disagreements without undercutting their expertise in law? I think there’s a way. Experts are conditioned to think in certain ways by virtue of their institutional training and practice, and when the stakes are fairly low, that conditioning holds. But once the stakes are high enough, things change. What might seem like a routine problem in ordinary times, a mere question of technicality, may not look like one in times of crisis. At that point, regular expert thinking breaks down and things become more contested.

And Americans do live in a polarized time. As political scientists have shown time and again, the polity of the United States experienced a realignment after the Civil Rights movement. The two major parties now have little substantial overlap, if any, and cater to entirely different constituencies: Republicans are the party of managers, evangelicals, and the white working class; Democrats, of organized labor, affluent professionals, and minorities. Political polarization has meant that even institutions that are supposed to be non-political (but really never have been) start to look more and more political, because there is now basic disagreement over things that once seemed simple and technical. This explains the spate of decisions in which the conservatives and liberals on the Supreme Court bench have split neatly along ideological lines.

But does that mean that judges are just politicians in robes (the thesis that Dan Kahan and others set out to debunk)? Not really. The US Supreme Court actually resolves a great many cases with fairly clear majorities, more than a third of them through unanimous decisions. These cases hardly ever make it into the public eye, and they involve what seem, to us at least, like arcane questions of regulation and jurisdiction (often questions of statutory interpretation). Another way to interpret this is to say that these cases are “technical” because they are not in the public eye and no great stakes attach to them, except for the parties in question. When the stakes are high—the ACA, campaign finance, abortion, gay marriage—the Supreme Court, like the rest of the country, is hopelessly polarized. And a good thing, too, because fundamental crises in values are best addressed through Politics (with a capital P) rather than handed off to bodies of experts.

Mooney takes a somewhat gratuitous swipe at STS as the discipline interested in “undermining the concept of scientific expertise.” But that misunderstands the point of science studies. These studies were not meant to show that experts are “biased.” They were meant to show that expert practices and discourse are designed to construct certain questions as “technical.” This is not necessarily a bad thing, because technical problems have technical solutions, but at certain points it drowns out other voices that might disagree with experts’ (technical) conclusions. What is more, once experts are framed as objective and arguing only from facts rather than values, opposing voices, which have no recourse to the language of facts, are delegitimized even further. What STS scholars argue is not that we need to trust experts more or less, but that public debates cannot be satisfactorily resolved by a partition in which one group of people (experts) is entrusted with matters of fact and everyone else with matters of value. In other words, values and facts need to be debated together, by both experts and laypeople (Jasanoff 2003). And this is the approach that is missing in the current conversation.

[This is a modified version of a post that first appeared on my blog.]

References

Edwards, Paul N. 2010. A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming. Cambridge, MA: MIT Press.

Jasanoff, Sheila. 2003. “Breaking the waves in science studies: Comment on H. M. Collins and Robert Evans, ‘The third wave of science studies’.” Social Studies of Science 33(3): 389-400.

Oreskes, Naomi, and Erik M. Conway. 2011. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. New York: Bloomsbury Press.

Wynne, Brian. 1996. “May the sheep safely graze? A reflexive view of the expert/lay knowledge divide.” In Risk, Environment and Modernity: Towards a New Ecology, ed. S. Lash, B. Szerszynski, and B. Wynne, 45-83. London: Sage.

Wynne, Brian. 2003. “Seasick on the third wave? Subverting the hegemony of propositionalism: Response to Collins & Evans (2002).” Social Studies of Science 33(3): 401-417.

2 Comments

  • Nick Seaver says:

    I’m curious what you think about the “third wave” STS work (really, just Harry Collins and Robert Evans) on expertise that Wynne and Jasanoff are responding to in the articles you cite: http://sss.sagepub.com/content/32/2/235 http://www.press.uchicago.edu/ucp/books/book/chicago/R/bo5485769.html

  • Hi Nick! I have mixed feelings about the Collins-Evans work. On the one hand, I love their “interactional expertise” concept. I think it’s helped me understand how experts relate to each other in workplaces and also how bodily expertise can co-exist with codified propositional knowledge (it is mediated, it seems to me, by people like coaches and teachers whose jobs are all about interactional expertise). I also sympathize with their desire to make more of a policy impact. A lot of STS work has this critical-theory tone and I can see where they are coming from.

    However, the idea that we, as sociologists of science, are experts on expertise, and that policy-makers should simply turn to us when they want to know which experts are relevant, makes me uncomfortable. Collins and Evans want to make the question of what expertise is relevant, which a lot of STS work understands as a political question, into a technical question. I guess that’s one way to intervene in political debates – and it’s not like I know of a better one – but it raises the separate question of _how_ to incorporate sociologists into a political mechanism for determining relevant expertise. I don’t think they get into that.

    Their contention that sociologists can help determine what expertise is relevant for a certain task/job/question also doesn’t accord with my own experience as an ethnographer or participant-observer. I certainly acquired interactional expertise when I hung around software engineers and computer scientists building learning infrastructures. But I’m not sure if I can adjudicate questions about who was right or wrong, or which technique is better for a certain job. (Not that anyone’s asking, and building software is perhaps not the same as studying long-term climate change.)

    I guess one test of the Collins-Evans approach would be whether they could give an example of a time when _they_ changed their mind from distrusting a group of experts to trusting them by reading a sociologist’s account of those experts and their history/practices. In a lot of the examples they give, most of us already trust those experts. But maybe they do give such an example; perhaps I need to go back and look at the book again.

    Anyway, I’ve rambled on for a while. I’m curious what you and others think of the third-wave work!
