Crowdsourcing the Expert

Leviathan by Thomas Hobbes. Licensed under Public Domain via Wikimedia Commons.

“Crowd” and “cloud” computing are exciting new technologies on the horizon, both for computer science types and for us STS types (science and technology studies, that is) who are interested in how different actors put them to (different) uses. Of these, crowd computing is particularly interesting: it is a technique that both improves artificial intelligence (AI) and operates to re-organize work and the workplace. In addition, as Lilly Irani shows, it also performs cultural work, producing the figure of the heroic problem-solving innovator. To this, I want to add another point: might “human computation and crowdsourcing” (as its practitioners call it) be changing our widely held ideas about experts and expertise?

Here’s why. I’m puzzled by how crowdsourcing research both valorizes expertise and, at the same time, sets about replacing the expert with a combination of programs and (non-expert) humans. I’m even more puzzled by how crowd computing experts rarely specify the nature of their own expertise; if crowdsourcing is about replacing experts, then what exactly are these “human computation” experts themselves experts on? Any thoughts, readers? How might we think about the figure of the expert in crowd computing research, given the recent surge of public interest in, and indeed fears about, new forms of this thing called artificial intelligence?

In a lecture, Rob Miller, a leading human computation researcher at MIT, defines crowd computing as follows:

Crowd computing is taking a large group of people on the web, getting them to make small contributions, so no one person is doing the whole task, but they are making small contributions, coordinated by the software, that solves problems that we don’t know how to do with software by individual users. And this group of people can be brought together and motivated in many different ways: Wikipedia does really well by motivating volunteers, maybe, to make small contributions. There’s been a big growth in games with a purpose. […] Sometimes I think of Facebook as being a global face recognition problem being solved by crowds, right, for social reasons, they’re tagging their friends.

The definition of crowd computing here is quite broad: not just Mechanical Turk but also Facebook and Wikipedia fit into it. Theorists will quibble with this sweeping classification [1], but from a computer science system-builder’s perspective it makes a lot of sense: it shows how computer scientists might approach this as a research problem and come up with ideas for new kinds of human computation systems.

Another computer scientist told me that, for him, crowdsourcing was about breaking up a task into smaller tasks that anyone can do. He emphasized the “anyone” part; the implication seemed to be that each micro-task had to be universal in some way, that is, so simple that anyone could do it, while the aggregate of such tasks produced something useful and intelligent. He also argued that while this has worked fairly well in the past (for labeling images, for transcribing audio), these simple micro-tasks cannot be aggregated into more complicated tasks: at some point, crowdworkers themselves have to be trained into becoming experts, or the crowd itself has to be composed of experts.
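To make this split-and-aggregate model concrete, here is a minimal sketch, entirely my own illustration in Python rather than any particular platform’s API (the function names and the majority-vote rule are my inventions), of how an image-labeling job might be broken into redundant micro-tasks and the non-expert answers combined:

```python
from collections import Counter

def split_into_microtasks(items, redundancy=3):
    """Turn one big labeling job into many tiny, identical micro-tasks.

    Each item is posted `redundancy` times so that several different
    (non-expert) workers answer it independently.
    """
    return [{"item": item, "copy": k} for item in items for k in range(redundancy)]

def aggregate_by_majority(answers):
    """Combine redundant worker answers with a simple majority vote.

    `answers` maps each item to the list of labels workers gave it; the
    hope is that the aggregate is more reliable than any single answer.
    """
    return {item: Counter(labels).most_common(1)[0][0] for item, labels in answers.items()}

# Hypothetical run: two images, each labeled by three independent workers.
microtasks = split_into_microtasks(["image_01.jpg", "image_02.jpg"])
print(len(microtasks))  # 6 micro-tasks posted to the crowd

answers = {
    "image_01.jpg": ["cat", "cat", "dog"],
    "image_02.jpg": ["dog", "dog", "dog"],
}
print(aggregate_by_majority(answers))  # {'image_01.jpg': 'cat', 'image_02.jpg': 'dog'}
```

The computer scientist’s point, of course, was that many interesting tasks do not decompose this neatly; at some point the workers themselves need expertise.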

Interesting, I thought, and set about trying to find examples. A quick Google search brought up a paper by Daniela Retelny et al. (2014), which introduces something the authors call “flash teams”: teams of experts that function as a sort of just-in-time organization, coordinated through software programs. The abstract is well worth reading in full:

We introduce flash teams, a framework for dynamically assembling and managing paid experts from the crowd. Flash teams advance a vision of expert crowd work that accomplishes complex, interdependent goals such as engineering and design. These teams consist of sequences of linked modular tasks and handoffs that can be computationally managed. Interactive systems reason about and manipulate these teams’ structures: for example, flash teams can be recombined to form larger organizations and authored automatically in response to a user’s request. Flash teams can also hire more people elastically in reaction to task needs, and pipeline intermediate output to accelerate completion times. To enable flash teams, we present Foundry, an end-user authoring platform and runtime manager. Foundry allows users to author modular tasks, then manages teams through handoffs of intermediate work. We demonstrate that Foundry and flash teams enable crowdsourcing of a broad class of goals including design prototyping, course development, and film animation, in half the work time of traditional self-managed teams.

In this paper at least, crowdworkers are themselves experts. The system helps someone called the “end user”: it lets this putative end user specify inputs and outputs, then creates and scaffolds a team that collectively accomplishes the task. Or as the paper puts it:

End users can assemble a flash team on-demand by providing only the desired input and output. During runtime, flash teams are elastic: they grow and shrink to meet task demands by hiring additional members from the crowd. Finally, their work can be pipelined: when in-progress results are enough for downstream tasks to begin work, the system passes in-progress results along to accelerate completion times.

We embody this approach in Foundry, a platform for end users to create flash teams and a runtime environment to manage them. Foundry allows users to create flash teams by directly authoring each task or forking a team that other users authored and then recruit from the oDesk online labor marketplace [1]. For team members, Foundry takes on managerial responsibilities to walk team members through the workflow, notify them of any schedule shifts, scaffold the handoff process, and provide members with shared awareness [14]. For the end user, Foundry abstracts away low-level management effort while allowing for high-level oversight and guidance. [emphasis in bold added]

So the crowd platform here works as a kind of manager. What’s interesting is that this imagined manager is not, say, the shop-floor manager who supervises de-skilled workers (the figure most crowdsourcing systems, like MTurk, seem to model themselves on, with their continuous and punishing quality-control systems) but a manager supervising workers from the so-called “creative class.”
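To get a feel for what this computationally managed workflow might look like, here is a rough sketch of my own (in Python; the roles, task names, and functions are invented for illustration and are not Foundry’s actual code or API). It captures only the modular tasks and handoffs described above, not the elastic hiring or the pipelining of in-progress results:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModularTask:
    """One modular task in a flash-team-style workflow.

    `role` is the kind of expert hired for the task (say, from an online
    labor market); `work` stands in for that expert actually doing it.
    """
    role: str
    work: Callable[[dict], dict]

def run_workflow(request: dict, team: list) -> dict:
    """A toy 'runtime manager': walk the team through the workflow,
    handing each task's intermediate output to the downstream task."""
    intermediate = request
    for task in team:
        print(f"Handoff to {task.role}, current state: {intermediate}")
        intermediate = task.work(intermediate)
    return intermediate

# Hypothetical end-user request: a design-prototyping workflow.
team = [
    ModularTask("UI designer", lambda d: {**d, "mockup": "low-fi sketch"}),
    ModularTask("Developer", lambda d: {**d, "prototype": "clickable demo"}),
    ModularTask("User tester", lambda d: {**d, "report": "usability notes"}),
]
print(run_workflow({"goal": "mobile app prototype"}, team))
```

Even in this toy form, it is the software, not a human manager, that knows the workflow and performs the handoffs, which is what makes the platform-as-manager reading plausible.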

Copper engraving of a “Türkischen Schachspielers” (Turkish chess player), constructed by Wolfgang von Kempelen, from Karl Gottlieb von Windisch, Briefe über den Schachspieler des Hrn. von Kempelen, nebst drei Kupferstichen die diese berühmte Maschine vorstellen, 1783. Licensed under Public Domain via Wikimedia Commons.

Another paper I found involved training (non-expert, unskilled) crowdworkers (here, Turkers) to become experts. Derrick Coetzee et al.’s 2015 paper suggests that “peer learning” techniques used in online classrooms can improve the performance of crowdworkers, especially given the right incentives. The authors go on:

Results suggest that looking more deeply into the learning literature may yield additional ways to improve crowd sourcing approaches and outcomes.

This is interesting too. While I have seen research that treats learners as crowds, this paper seems to suggest that crowdworkers might productively be thought of as learners.

What does one make of all this? It is one thing to create software frameworks to manage low-skill workers (a job that is itself seen as low-status), but what of software frameworks that manage the creative class? Surely this should lead crowd computing researchers/companies to come into conflict with managers, like Nathan Ensmenger’s “computer boys” in the 60s and 70s? Or does this research defuse this threat by suggesting that now every “end user” can be a manager?

Finally, what is one to make of the crowd computing researchers who always seem to present themselves as neutral system-builders? What, precisely, are they experts on? Are they experts on building systems (which is often what comes through in their research papers)? Or are they experts on organizational processes and worker coordination (the Retelny et al. paper on flash teams above, for example, draws on the organizational behavior literature)? Are they experts on expertise itself? Or are they just experts on crowds, sui generis? That crowd computing researchers can work on re-configuring worker-manager-user relations without quite specifying the nature of their own expertise may well be their biggest achievement.

End Notes:

[1] This is also the question to which theorists like Jonathan Zittrain and Daren Brabham attend: how might we think of different kinds of crowdsourcing? Zittrain, in his famous “Minds for Sale” lecture, suggests that one way of thinking about these crowdsourcing platforms is to ask what kinds of skills they require of their workers: InnoCentive, for example, asks for highly expert users, who usually complete one complicated task (like solving a chemistry problem), while something like Amazon Mechanical Turk asks its workers to complete relatively small tasks, and these workers often have no idea how what they do fits into the larger task. Brabham complains, in his book Crowdsourcing, about how the term has been applied to practically anything that involves volunteers. He suggests that the word crowdsourcing is better applied to pursuits where a top-level organizational goal is meshed with bottom-up labor processes farmed out to a number of volunteers. Chris Kelty and his co-authors also have a wonderful paper classifying the different forms of participation on the Internet.

Editor’s Note: This post was edited on Feb. 24, 2015 to add citation dates for two papers.
