Let’s Think about the University: Anthropology, Data Science, and the Function of Critique

There have been surprisingly few sustained, ethnographic studies of the university that aim to understand it as an institution devoted at once to the production of knowledge and technologies, the circulation of those products, and the cultivation of particular types of subjects. Ethnographers have largely worked at it piecemeal, with admittedly excellent work from both the anthropology of education and the anthropology of science carving out various areas of inquiry: classrooms, laboratories, admissions offices, student groups, start-up incubators. To my mind, the lack of a synthetic approach to the knowledge work going on in the university might be due to the disappointing fact that these two camps within anthropology don’t talk to each other very much. In part, this is a result of their different goals, positions within the ecology of anthropological knowledge production, possible sources of research funding, and available career paths both within and without academia; yet, despite the sociological intelligibility of this lack of communication, it remains intellectually unfortunate.

As the business of research and education becomes increasingly corporatized, increasingly shaped by wider forms of rationality that rely upon quantification, standardization, and the devolution of responsibility to the individual, it becomes correspondingly urgent to develop a rigorous, holistic understanding of the university as such. This has only been underscored by my fieldwork among Russian data scientists, who are themselves involved in the ongoing reorganization of higher education there. That is to say, the neoliberal university qua institution, with its own internal forms of organization and expertise as well as its place within the broader political economy, deserves to be the object of a newly shared inquiry. The current shape of the university has profound implications for the professional lives of anthropologists of both science and education, and similarly thoroughgoing epistemological consequences for their ongoing, ultimately complementary attempts to understand how contemporary people make knowledge.

I’m working through the latter half of this proposition in my current research project. Data science has emerged as a key site of intervention into the educational system in Russia; elites from both industry and the academy are working together to modernize and repurpose Russia’s formidable pedagogical infrastructure in pure mathematics and theoretical computer science to train a new generation of algorithmists, developers, and programmers in both the practical skills and professional attitudes that they see as necessary for the creation of a truly Russian knowledge economy. The result has been both the creation of a number of hybrid, industrial-academic institutions and wide-ranging modifications to curricula and requirements at more traditional institutions. These changes are occurring within a broader context of profound reforms to post-graduate education and to the science system more generally. (read more...)

How influential was Alan Turing? The tangled invention of computing (and its historiography)

Alan Turing was involved in some of the most important developments of the twentieth century: he invented the abstraction now called the Universal Turing Machine, which every undergraduate computer science major learns about in college; he was part of the great British Enigma code-breaking effort that deserves at least some credit for the Allied victory in World War II; and, last but not least, while building early digital computers after his Enigma work, he described, in a fascinating philosophical paper that continues to puzzle and excite to this day, the thing we now call the Turing Test for artificial intelligence. His career was ultimately cut short, however, after he was convicted in Britain of “gross indecency” (in effect, for being gay), and two years later he was found dead in an apparent suicide.

The celebrations of Turing’s birth centenary began three years ago, in 2012. As a result, far, far more people now know about him than perhaps ever before. 2014 was probably the climax, since nothing is as consecrating as having an A-list Hollywood movie based on your life: a film with big-name actors that garners cultural prestige, decent press, and of course, an Academy Award. I highly recommend Christian Caryl’s review of The Imitation Game (which covers Turing’s work in breaking the Enigma code). The film is so in thrall to the Cult of the Genius that it adopts a strategy not so much of humanizing Turing or giving us a glimpse of his life, but of co-opting the audience into feeling superior to the antediluvian, backward, not to mention homophobic, Establishment (here mostly represented by Tywin Lannister, I’m sorry, Commander Denniston). Every collective achievement, every breakthrough, every strategy is credited to Turing, and to Turing alone. One scene from the film should give you a flavor of this: as his colleagues potter around trying to work out the Enigma encryption on pieces of paper, Turing, in a separate room all by himself, is shown building a Bombe (a massive, complicated machine!) with his bare hands, armed only with a screwdriver!

The movie embodies a contradiction that one can also find in Turing’s life and work. On one hand, his work was enormously influential after his death: every computer science undergrad learns about the Turing Machine, and the lifetime achievement award of the premier organization of computer scientists is called the Turing Award. But on the other, he was relatively unknown while he lived (“relatively” being a key word here, since he studied at Cambridge and Princeton and crossed paths with minds ranging from Wittgenstein to John von Neumann). Perhaps in an effort to change this, the movie (like many of his recent commemorations) goes all out in the opposite direction: it credits Turing with every single collective achievement, from being responsible for the entirety of the British code-breaking effort to inventing the modern computer and computer science. (read more...)

The Entrepreneurial Future

Georges Doriot, who founded the first publicly traded venture capital firm in 1946, arguably announced a new regime of speculative capital when he said: “I want money to do things that have never been done before” (Ante 2008). In the years immediately after World War II, the establishment of venture capital firms was crucial to the ascent of a new kind of commercial enterprise, one that has profoundly influenced the development of digital technologies on a very broad scale. It was with the creation of the first venture capital firms that a financial network to support technology startup companies began to form. The fact that the earliest Silicon Valley startups were funded by venture capital investments is an indicator of the degree to which the developmental trajectory of personal computing has been intertwined with that of finance capital. Fairchild Semiconductor, for example, was the first startup funded by venture capital (in 1957), and it launched numerous “spin-off” companies that were collectively responsible for the innovations that enabled what became the microelectronics industry. Since then, of course, venture capital has grown into a powerful industry that directs vast financial resources into technology startup companies.

But venture capital investment doesn’t only fuel the tech startup economy; it actively shapes it. Research on Silicon Valley’s high-tech industry suggests that venture capitalists’ importance to processes of innovation has more to do with their role in selecting promising companies than with the financing itself (Ferrary & Granovetter 2009). Beyond choosing the criteria of valuation by which the potential commercial success of startups is measured, they determine which innovations will even have a chance to enter the market. The result is that Silicon Valley innovation is guided directly by finance capital’s future-oriented logic of speculation. Companies with few tangible assets pursue funding without which they will have little chance to successfully launch their products, and, as business news coverage attests, some companies are valued at billions of dollars without demonstrating that they have the means to become profitable. What matters is keeping the possibility of a market open. One could think, for example, of Snapchat, a popular photo and video sharing app that has expanded through significant venture capital investments. Last year the company was valued at $10 billion, mostly on the basis of its potential to reach users and create an audience, despite the fact that it generates almost no revenue. (read more...)

Crowdsourcing the Expert

“Crowd” and “cloud” computing are exciting new technologies on the horizon, both for computer science types and for us STS types (science and technology studies, that is) who are interested in how different actors put them to (different) uses. Of the two, crowd computing is particularly interesting: it is a technique that both improves artificial intelligence (AI) and operates to reorganize work and the workplace. In addition, as Lilly Irani shows, it also performs cultural work, producing the figure of the heroic problem-solving innovator. To this, I want to add another point: might “human computation and crowdsourcing” (as its practitioners call it) be changing our widely held ideas about experts and expertise? Here’s why. I’m puzzled by how crowdsourcing research valorizes expertise while at the same time setting about replacing the expert with a combination of programs and (non-expert) humans. I’m even more puzzled by how crowd computing experts rarely specify the nature of their own expertise: if crowdsourcing is about replacing experts, then what exactly are these “human computation” experts themselves experts on? Any thoughts, readers? How might we think about the figure of the expert in crowd computing research, given the recent surge of public interest in new forms of, and indeed fears about, this thing called artificial intelligence? (read more...)

Country in the Cloud

We are accustomed to thinking of the “cloud” as a placeless, formless mass of data floating “out there.” It has even been argued that new computer technologies and the movement of companies’ data “to the cloud” might so transform our inherited notions of time, space, and power that they could mean the end of history, geography, and power. The case of “e-Estonia,” however, challenges this notion: Estonia is a country which, unlike the people and companies going “to the cloud,” hopes to actually move itself “into the cloud,” with profound implications for how we understand both the cloud metaphor and geopolitics in the digital age.

e-Estonia

Estonia is a small former Soviet republic in northern Europe, with a territory of only 45 thousand square kilometers and a population of just 1.3 million. Since the collapse of the Soviet Union in 1991, it has made a number of moves towards building a digital state, or, as it is often referred to, an “e-Estonia.” As a Research Fellow with the Centre for Science and Technology Studies of the European University at St. Petersburg, I have been studying how, with e-Estonia, “the cloud” actually becomes a new type of space, the contours of which affect other concrete spaces and feed into a new type of nation-building project. (read more...)

Developer’s Dilemma and Making as Privilege

A book I wrote, Developer’s Dilemma, was recently published by MIT Press. It is an ethnography that explores the secretive everyday worlds of game developers working in the global videogame industry. There is an excerpt of the book over at Culture Digitally if you’re interested in checking out some of the words prior to making a commitment to the rest of the text.

But I didn’t really want to start this year off just plugging my book. I mean, I did plug it. Just then. You should check it out. But that isn’t the point of this post. I recently Skyped into Tom Boellstorff’s graduate seminar to discuss the book. One of the questions they asked me had to do with “game talk,” and whether I thought game talk had more to do with boundary policing than with having any real utility and functionality. Game talk, in essence, is the use of game names as a shorthand means by which to reference the rather complex mechanics and ideas that set certain games apart. It was a wonderful question, because in the book I write: (read more...)

Technology and Religion: An Interview with Michael Sacasas of The Frailest Thing (Part 2)

(Michael Sacasas is a PhD candidate in the “Texts and Technology” program at the University of Central Florida. He blogs about technology at The Frailest Thing. This post follows up on our conversation from earlier in the year, which touched on some of the foundational work on the relationship between western religion and technology.)

I am glad you brought up Nye’s pessimism over the consumer sublime and his consternation over the potential drying up of the technological well. Nye wrote of the consumer sublime, as embodied by Las Vegas, as a “rush of simulations” and as marking a change from a technological sublime emphasizing production, particularly in the sense of new knowledge, to one concerned solely with consumption. How do you see the relation between simulation and technological production? Do you think Nye’s pessimism is warranted?

Timely question. There’s been more than a little angst of late about technological stagnation, much of it recently associated with PayPal co-founder Peter Thiel. For the past few years, Thiel has been warning about trends that, in his estimation, suggest technological innovation may have stalled out over the last thirty or so years. We were promised flying cars, he is fond of saying, and we got 140 characters instead (a passing shot at Twitter, of course). (read more...)

AAA 2014 STS Recap

This year’s American Anthropological Association Annual Meeting saw a number of active CASTAC- and STS-inspired panels, many of which featured scholars from our own community. We discussed engaging with the Anthropocene, which is becoming a hot new topic, perhaps replacing the ontological focus from last year. Panelists explored whether this term is the “gift” that Latour proposes. The meeting also saw fascinating explorations of issues in design and elitism, as well as of the theoretical and methodological issues that we must all consider when moving our research trajectory from “studying up” to “studying with.” (read more...)