
How (Not) to Talk about AI

Most CASTAC readers familiar with science and technology studies (STS) have probably had conversations with friends—especially friends who are scientists or engineers—that go something like this: Your friend says that artificial intelligence (AI) is on its way, whether we want it or not. Programs (or robots, take your pick) will be able to do a lot of tasks that, until now, have always needed humans. You argue that it’s not so simple; that what we’re seeing is as much a triumph of re-arranging the world as it is of technological innovation. From your point of view, a world of ubiquitous software is being created, one that draws on contingent, flexible, just-in-time human labor, with pervasive interfaces between humans and programs that make each immediately available to the other. Your comments almost always get misinterpreted as a claim that the programs themselves are not really intelligent. Is that what you believe, your friend asks? How do you explain all those amazing robot videos, then? “No, no,” you admit, “I am not saying there’s no technological innovation, but it’s complicated, you know.” Sometimes, at this point, it’s best to end the conversation and move on to other matters.

Will the real AI please stand up? The first video above shows the latest contraptions from the Google-owned Boston Dynamics.  The second describes the Hawk-Eye ball-tracking system that is used on the professional tennis tour. 

In a recent article in The Atlantic, Ian Bogost makes a compelling argument that too many programs—generic, mundane, doing-the-job-they-were-designed-for programs—are given the moniker “artificial intelligence.” If these are not AI, then what software might qualify as authentic AI? Bogost’s colleague suggests one answer: to qualify as AI, programs must act the way programs do in the movies. This means two things: first, that the program “must learn over time in response to changes in its environment”; and second, that “what it learns to do should be interesting enough that it takes humans some effort to learn.” So, playing chess or Go is AI; deciding whether there is a cat in a picture is probably not. Bogost, though, suggests that what goes on under the label of “artificial intelligence” is too disparate to be grouped together under one banner. Demystifying Silicon Valley products (Facebook, Twitter) as merely human-guided software implementations, he says, is important given how often they get framed as “totems of otherworldly AI.” Bogost argues that “in most cases … the systems making claims to artificial intelligence aren’t sentient, self-aware, volitional, or even surprising. They’re just software” (my emphasis). He gives several examples of these software-programs-masquerading-as-AI: machine-learning-driven text-processing programs that identify toxic comments; a tiny device that attaches to the tennis net and uses computer vision techniques to signal when the ball is out; Coca-Cola’s bots that generate text for advertisements; Twitter’s content filters that allow you to block other users and content. This elevation of what should be routine software into AI, he contends, is about corporate optics (pleasing investors) and about lulling the public into thinking that these programs work as a sort of magic wand.

Bogost’s piece is well-argued and convincing, but stepping into the boundary wars over the definition of “artificial intelligence” can be tricky. When AI researchers (and today this includes people who label themselves machine learning researchers, data scientists, even statisticians) debate what AI really means, their purpose is clear: to legitimate particular programs of research. What agenda do we—as non-participants, yet interested bystanders—have in this debate, and how might it best be expressed through boundary work? STS researchers have argued that contemporary AI is best viewed as an assemblage that embodies a reconfigured version of human-machine relations: humans are constructed, through digital interfaces, as flexible inputs to and/or supervisors of software programs, which in turn perform a wide variety of small-bore, high-intensity computational tasks (primarily processing large amounts of data and computing statistical similarities). It is this reconfigured assemblage that promises to change our workplaces, rather than any specific technological advance. The STS agenda has been to concentrate on the human labor that makes this assemblage function, and to argue that it is precisely the invisibility of this labor that allows the technology to seem autonomous. And of course, STS scholars have argued that the particular AI assemblage under construction is disproportionately tilted towards benefiting Silicon Valley capitalists.

I interpret Bogost’s piece as an attempt to translate this rather complicated, non-reductionist position (hence the conversation that opened this blog post) into a clear-cut boundary position. And to be fair, this sort of demystification of AI seems much needed when, for example, the founder of Facebook writes in a sweeping manifesto that the need of the hour is to “build artificial intelligence to understand more quickly and accurately what is happening across our community.” It’s important to point out, as Bogost does, that Mark Zuckerberg’s somewhat magical-sounding AI is essentially a text-processing program, built by Facebook’s computer scientists and engineers, trained on particular datasets, that needs human supervision and often depends on human labor to fill in the gaps.
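To make that demystification concrete, here is a minimal, purely illustrative sketch (in Python, using scikit-learn) of the kind of text-processing program at issue: a comment classifier fit to human-labeled examples. The comments, labels, and threshold below are hypothetical, and this is not Facebook’s actual system; the point is simply that the “AI” is ordinary supervised software, with human judgment required at both ends.

```python
# A minimal, purely illustrative sketch: a "toxic comment" classifier of the
# kind Bogost describes is, at bottom, ordinary supervised software. Humans
# label examples, a model fits statistical patterns in word use, and humans
# keep supervising what it gets wrong. (Data and labels here are made up.)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-labeled training comments (1 = flagged by a moderator).
comments = [
    "thanks for sharing this, really helpful",
    "you are an idiot and should log off forever",
    "interesting point, I had not considered that",
    "nobody wants you here, get lost",
]
labels = [0, 1, 0, 1]

# The "AI": word frequencies weighted by TF-IDF, fed to logistic regression.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

# Human judgment returns at the end: someone has to choose this threshold and
# review the borderline cases the model inevitably misclassifies.
score = model.predict_proba(["get lost, you idiot"])[0, 1]
print("flag for review" if score > 0.5 else "leave it alone")
```

Everything that makes such a system work in practice (who labels the comments, who reviews the borderline cases, who decides what counts as “toxic”) is exactly the human labor that the AI framing renders invisible.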

This move towards creating distinct boundaries, however, entails two risks. First, labeling AI as just software has a long history in the boundary wars fought between AI-boosters and AI-critics. When the philosopher Hubert Dreyfus wrote his famous “Alchemy and Artificial Intelligence” paper for the RAND Corporation (later expanded into his book What Computers Can’t Do), he argued that AI was premised on the notion that when human beings did putatively “intelligent” things, they were processing information, following a sort of “plan” worked out in their heads. Drawing on the work of post-foundationalist philosophers like Heidegger, Wittgenstein, and Merleau-Ponty, Dreyfus argued that human action could not be reduced to rule-following or information processing, and that once AI systems were taken out of their toy “micro-worlds,” they would fail. For their part, AI researchers argued that critics like Dreyfus moved the “intelligence” goalposts when it suited them: when programs worked (as the chess- and checkers-playing programs of the 1960s and 1970s did), the particular tasks they performed were simply moved out of the realm of intelligence. Bogost’s article could invite a similar sort of retort from AI researchers; after all, building a robust computer vision system to recognize objects was extremely difficult thirty years ago but is considered fairly routine today.

Second, painting many so-called AI systems as just software plays into another hegemonic discourse of Silicon Valley: the software program as tool. In this formulation, software programs are more or less open-ended “tools” that users can put to work for their own goals and purposes. Not surprisingly, the word “tool” appears in Zuckerberg’s manifesto 10 times (by comparison, “AI” and “artificial intelligence” together occur 7 times, and “community” more than 100 times). For example, Zuckerberg writes: “our commitment is to continue improving our tools to give you the power to share your experience”; “we will work on building new tools that encourage thoughtful civic engagement”; and so on. In Thomas Malaby’s study of Linden Lab, the architects of Second Life saw themselves as designing “tools” that Second Life residents could take up and put to their own uses. As a Linden employee put it in an email: “We are a tool, a platform. We are plastic: users mold us as they feel” (Malaby, 110).

The notion of the tool has a long history in software development, and throughout the post-war era it served to connect internal technical developments in computing to the momentous social transformations outside. In the UNIX operating system, tools are “programs that do one thing, and do it well.” The philosophy of software design that co-constructed both “modular” software and the autonomous, enterprising software “user” characterizes these tools as able to be “glue[d] together in interesting ways” (Salus, 80). Similarly, Stewart Brand’s Whole Earth Catalog provided “tools” for the counterculture to help them “head back to the land” (Turner, 5). It was partly through being framed as a “tool,” Fred Turner argues, that the computer came to be imbued with notions of freedom and liberty. It goes without saying that unlike the “tools” built by the Cold War university labs or the free software movement, the tools of Silicon Valley are embedded in powerful logics of commerce.

A critical analysis of our new software infrastructures, then, presents something of a quandary. Talking about artificial intelligence has the advantage that it can be an impetus to thinking about politics and these powerful logics. Even the most technologically hopeful AI researchers agree that the march of machine learning poses political problems, such as the disappearance of good (i.e., well-paying yet stable) jobs. Of course, they attribute this to bad politics rather than to the technology. Yet focusing on AI risks masking the kinds of human labor that operate, and even constitute, the AI. The language of software or tools, on the other hand, is perhaps a better descriptor of the technology itself, but Silicon Valley has made it into a politically neutral term, one that emphasizes the autonomy and creativity of individuals while downplaying the powerful capitalist logics underneath.

Selected References:

Malaby, Thomas. Making Virtual Worlds: Linden Lab and Second Life. Cornell University Press, 2009.

Salus, Peter H. A Quarter Century of UNIX. Addison-Wesley, 1994.

Turner, Fred. From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. University of Chicago Press, 2006.

Suchman, Lucy. Human-Machine Reconfigurations: Plans and Situated Actions. 2nd ed. Cambridge University Press, 2006.
