Tag: experts

The Rise of Citizen Science, Part I: A Principled Approach

This is the first in a two-part series about the rise of citizen science, from CASTAC Contributing Editor Todd Hanson. When it comes to science, Albert Einstein was an amateur. Well, at least he was during the time he made his most groundbreaking contributions to physics. From 1902 to 1908, Einstein’s day job was that of an assistant patent examiner at the Swiss Federal Office for Intellectual Property. It was during these six years as an avocational scientist that he developed the theories that transformed physics. Working as what we would today call a “citizen scientist,” he published four papers that would become a foundation of modern physics. While Einstein’s case may be unique, his life offers a lesson: society ignores the contributions of scientists and scholars unaffiliated with universities or research institutions at its own risk. The bifurcation of scientists into professionals and amateurs is a relatively recent and arbitrary development. Several notable “gentlemen scientists” of the seventeenth through nineteenth centuries, including Robert Boyle, Henry Cavendish, and Charles Darwin, had no direct affiliation with corporate or public institutions and were not paid scientists, or even science professors, for much or even all of their lives, yet they were immensely important in the history of science. Historically, the divide widened as professional scientists became better educated in their fields and as paid positions in universities and, later, corporations increased. Still, general public interest in scientific matters remained strong, and although amateurs were rarely welcomed into science’s inner circles, they continued to work, unpaid and mostly unacknowledged, on scientific questions. (read more...)

Trusting Experts: Can we reconcile STS and Social Psychology?

Numerous battles are being fought today within and across America’s political landscape, from global warming to the regulation of new technologies (e.g., GMOs, fracking). Science plays a big role in these debates, and as a result, social psychologists, political scientists, economists, and other social scientists have become interested in the question of why people (or rather, certain people) don’t accept scientific findings. These social scientists have converged on a concept called motivated reasoning: because our reasoning powers are directed towards particular ends, we tend to pick the facts that best fit our needs and motivations. Motivated reasoning, in this explanation, is a universal phenomenon, perhaps a product of evolution; all human beings do it, including experts. It also raises the profoundly disturbing possibility of a scientific end to our Enlightenment hopes that experts—let alone publics—can be rational, that they can neatly separate facts from values and facilitate a harmonious society. Influential science journalists have now started drawing on those findings. Chris Mooney, who made a name for himself writing The Republican War on Science, drew on social psychological and brain imaging research on political bias in a well-cited Mother Jones piece, “The Science of Why We Don’t Believe Science: How our brains fool us on climate, creationism, and the vaccine-autism link.” Political scientists have also written about this in high-profile outlets, among them Brendan Nyhan in the New York Times. The topic has also made several appearances on The Monkey Cage, a political science blog that is now part of the Washington Post. (read more...)

Crowdsourcing the Expert

“Crowd” and “cloud” computing are exciting new technologies on the horizon, both for computer science types and for us STS-types (science and technology studies, that is) who are interested in how different actors put them to (different) uses. Of these, crowd computing is particularly interesting — as a technique that both improves artificial intelligence (AI) and operates to re-organize work and the workplace. In addition, as Lilly Irani shows, it also performs cultural work, producing the figure of the heroic problem-solving innovator. To this, I want to add another point: might “human computation and crowdsourcing” (as its practitioners call it) be changing our widely held ideas about experts and expertise? Here’s why. I’m puzzled by how crowdsourcing research valorizes expertise while at the same time setting about replacing the expert with a combination of programs and (non-expert) humans. I’m even more puzzled that crowd computing experts rarely specify the nature of their own expertise; if crowdsourcing is about replacing experts, then what exactly are these “human computation” experts themselves experts in? Any thoughts, readers? How might we think about the figure of the expert in crowd computing research, given the recent surge of public interest in new forms of — and indeed fears about — this thing called artificial intelligence? (read more...)