Finally! 2015 is the year of soils! Ready the celebration. Polish your spade, pick, and shovel, and carefully wrap those gifts of organic fertilizer you’ve been hiding away. It’s going to be a hell of a party.
Humor aside, soil is obviously important in a number of very complex ways. The Food and Agriculture Organization of the UN (FAO) is spearheading the “2015 International Year of Soils” initiative to raise awareness of soil issues for food systems and broader environmental concerns. The director of the FAO, José Graziano da Silva, had the following to say of the importance of soil: “The multiple roles of soils often go unnoticed. Soils don’t have a voice, and few people speak out for them. They are our silent ally in food production” (as quoted on FAO’s website).
Yet as I found while researching soil conservation in Haiti in 2012 and examining the history of soil conservation more broadly, it seems that many people have spoken out for soils. In fact, through the panic surrounding the 1930s Dust Bowl crisis in the United States, soil erosion arguably became the first global environmental problem (Anderson 1984). This rapid spread of environmental concern highlights the way that soil has, in the past, captured the imagination and emotion of governments around the world. But the spread of soil conservation was not the seemingly de-politicized “awareness” campaign that we’re presented with by the FAO. Rather, in the 1930s, soil conservation was rooted in a desire to control and manipulate rural farmers. So while I’d agree with Mr. da Silva that soils do not have an “audible” voice, I’d argue that we need to pay far more attention to who speaks for soils and why.
There have been surprisingly few sustained, ethnographic studies of the university that aim to understand it as an institution devoted at once to the production of knowledge and technologies, the circulation of those products, and the cultivation of particular types of subjects. Ethnographers have largely worked at it piecemeal, with admittedly excellent work from both the anthropology of education and of science carving out various areas of inquiry: classrooms, laboratories, admissions offices, student groups, start-up incubators. To my mind, the lack of a synthetic approach to the knowledge work going on in the university might be due to the disappointing fact that these two camps within anthropology don’t talk to each other very much. In part, this is a result of their different goals, positions within the ecology of anthropological knowledge production, possible sources of research funding, and available career paths both within and without academia; yet, despite the sociological intelligibility of this lack of communication, it remains intellectually unfortunate.
As the business of research and education becomes increasingly corporatized, increasingly shaped by wider forms of rationality that rely upon quantification, standardization, and the devolution of responsibility to the individual, it becomes correspondingly urgent to develop a rigorous, holistic understanding of the university as such. This has only been underscored by my fieldwork among Russian data scientists, who are themselves involved in the ongoing reorganization of higher education here. That is to say, the neoliberal university qua institution, with its own internal forms of organization and expertise as well as its place within the broader political economy, deserves to be the object of a newly shared inquiry. The current shape of the university has profound implications for the professional lives of anthropologists of both science and education, and similarly thorough-going epistemological consequences for their ongoing, ultimately complementary attempts to understand how contemporary people make knowledge.
I’m working through the latter half of this proposition in my current research project. Data science has emerged as a key site of intervention into the educational system in Russia; elites from both industry and the academy are working together to modernize and re-purpose Russia’s formidable pedagogical infrastructure in pure mathematics and theoretical computer science to train a new generation of algorithmists, developers, and programmers in both the practical skills and professional attitudes that they see as necessary for the creation of a truly Russian knowledge economy. The result has been both the creation of a number of hybrid, industrial-academic institutions and wide-ranging modifications to curriculum and requirements at more traditional institutions. These changes are occurring within a broader context of profound reforms to post-graduate education and the science system more generally.
Alan Turing was involved in some of the most important developments of the twentieth century: he invented the abstraction now called the Universal Turing Machine, which every undergraduate computer science major learns about in college; he was involved in the great British Enigma code-breaking effort that deserves at least some credit for the Allied victory in World War II; and last but not least, while working on building early digital computers post-Enigma, he described — in a fascinating philosophical paper that continues to puzzle and excite to this day — the thing we now call the Turing Test for artificial intelligence. His career was ultimately cut short, however, after he was convicted in Britain of “gross indecency” (in effect, for being gay), and two years later he was found dead in an apparent suicide.
The celebrations of Turing’s birth centenary began three years ago in 2012. As a result, far, far more people now know about him than perhaps ever before. 2014 was probably the climax, since nothing is as consecrating as having an A-list Hollywood movie based on your life: a film with big-name actors that garners cultural prestige, decent press, and of course, an Academy Award. I highly recommend Christian Caryl’s review of The Imitation Game (which covers Turing’s work in breaking the Enigma code). The film is so in thrall to the Cult of the Genius that it adopts a strategy not so much of humanizing Turing or giving us a glimpse of his life, but of co-opting the audience into feeling superior to the antediluvian, backward, not to mention homophobic, Establishment (here mostly represented by Tywin Lannister, I’m sorry, Commander Denniston). Every collective achievement, every breakthrough, every strategy, is credited to Turing, and to Turing alone. One scene from the film should give you a flavor of this: as his colleagues potter around trying to work out the Enigma encryption on pieces of paper, Turing, in a separate room all by himself, is shown building a Bombe (a massive, complicated machine!) single-handedly, armed with nothing but a screwdriver!
The movie embodies a contradiction that one can also find in Turing’s life and work. On one hand, his work was enormously influential after his death: every computer science undergrad learns about the Turing Machine, and the lifetime achievement award of the premier organization of computer scientists is called the Turing Award. But on the other, he was relatively unknown while he lived (relatively being a key word here, since he studied at Cambridge and Princeton and crossed paths with minds ranging from Wittgenstein to John Von Neumann). Perhaps in an effort to change this, the movie (like many of his recent commemorations) goes all out in the opposite direction: it credits Turing with every single collective achievement, from being responsible for the entirety of the British code-breaking effort to inventing the modern computer and computer science.
Georges Doriot, who founded the first publicly traded venture capital firm in 1946, arguably announced a new regime of speculative capital when he said: “I want money to do things that have never been done before” (Ante 2008). In the years immediately after World War II, the establishment of venture capital firms was crucial to the ascent of a new kind of commercial enterprise, one that has profoundly influenced the development of digital technologies on a very broad scale. It was with the creation of the first venture capital firms that a financial network to support technology startup companies began to form. The fact that the earliest Silicon Valley startups were funded by venture capital investments is an indicator of the degree to which the developmental trajectory of personal computing has been intertwined with that of finance capital. Fairchild Semiconductor, for example, was the first startup funded by venture capital (in 1957), and it launched numerous “spin-off” companies that were collectively responsible for the innovations that enabled what became the microelectronics industry. Since then, of course, venture capital has grown into a powerful industry that directs vast financial resources into technology startup companies. But venture capital investment doesn’t only fuel the tech startup economy — it actively shapes it.
Research on Silicon Valley’s high tech industry suggests that venture capitalists’ importance to processes of innovation has more to do with their role in selecting promising companies than with simply providing financing itself (Ferrary & Granovetter 2009). Beyond choosing the criteria for valuation by which the potential commercial success of startups is measured, they determine which innovations will even have a chance to enter the market. The result is that Silicon Valley innovation is guided directly by finance capital’s future-oriented logic of speculation. Companies with few tangible assets pursue funding without which they will have little chance to successfully launch their products and, as business news coverage attests, some companies are valued at billions of dollars without demonstrating that they have the means to become profitable. What matters is keeping the possibility of a market open. One could think, for example, of Snapchat, a popular photo and video sharing app that has expanded through significant venture capital investments. Last year, the company was valued at $10 billion, mostly on the basis of its potential to reach users and create an audience, despite the fact that it generates almost no revenue.
“Crowd” and “cloud” computing are exciting new technologies on the horizon, both for computer science types and for us STS types (science and technology studies, that is) who are interested in how different actors put them to (different) uses. Of the two, crowd computing is particularly interesting — as a technique that both improves artificial intelligence (AI) and operates to re-organize work and the workplace. In addition, as Lilly Irani shows, it also performs cultural work, producing the figure of the heroic problem-solving innovator. To this, I want to add another point: might “human computation and crowdsourcing” (as its practitioners call it) be changing our widely held ideas about experts and expertise?
Here’s why. I’m puzzled by how crowdsourcing research both valorizes expertise and, at the same time, sets about replacing the expert with a combination of programs and (non-expert) humans. I’m even more puzzled by how crowd computing experts rarely specify the nature of their own expertise; if crowdsourcing is about replacing experts, then what exactly are these “human computation” experts themselves experts on? Any thoughts, readers? How might we think about the figure of the expert in crowd computing research, given the recent surge of public interest in new forms of — and indeed fears about — this thing called artificial intelligence?
A book I wrote, Developer’s Dilemma [Press, Amazon Physical Book, Amazon Kindle, iBooks], was recently published by MIT Press. It is an ethnography that explores the secretive everyday worlds of game developers working in the global videogame industry. There is an excerpt of the book over at Culture Digitally if you’re interested in checking out some of the words prior to making a commitment to the rest of the text.
But I didn’t really want to start this year off just plugging my book. I mean, I did plug it. Just then. You should check it out. But that isn’t the point of this post. I recently Skyped into Tom Boellstorff’s graduate seminar to discuss the book. One of the questions they asked me had to do with “game talk,” and whether I thought game talk was more about boundary policing than about actually having real utility and functionality. Game talk, in essence, is the use of game names as a shorthand means by which to reference the rather complex mechanics and ideas that set certain games apart. It was a wonderful question, because in the book I write:
(Michael Sacasas is a PhD candidate in the “Texts and Technology” program at The University of Central Florida. He blogs about technology at The Frailest Thing. This post follows on our conversation from earlier in the year which touched on some of the foundational work on the relationship between western religion and technology.)
I am glad you brought up Nye’s pessimism over the consumer sublime and his consternation over the potential drying of the technological well. Nye wrote of the consumer sublime, as embodied by Las Vegas, as a “rush of simulations” and as marking a change from a technological sublime emphasizing production, particularly in the sense of new knowledge, to one concerned solely with consumption. How do you see the relation between simulation and technological production? Do you think Nye’s pessimism is warranted?
Timely question. There’s been more than a little angst of late about technological stagnation, much of it recently associated with PayPal cofounder Peter Thiel. For the past few years, Thiel has been warning about trends which, in his estimation, suggest that technological innovation may have stalled out over the last thirty or so years. We were promised flying cars, he is fond of saying, and we got 140 characters instead (a passing shot at Twitter, of course).
This anthropocene thing has really taken hold. We’re caught in the grip of extinction, visualizing our own end (or at least visualizing the data of our own end), urgently calling upon each other to act, convincing ourselves that we have the power – scientifically, technologically and maybe politically – to do something about it. We can organize marches, resurrect species, bank seeds, manipulate clouds, make videos of collapsing ice caps, drive hybrids, fly to space stations. Of course, our worry over the planet’s health is narcissistic, in the end. It’s not the planet’s survival we are worried about. It’s our own, human future.
These anthropocentric worries over human continuity make for a strange tension in the theoretical moment: they are appearing just as a range of disanthropic moves have attempted to decenter and displace the human as subject, agent, or figure: Actor-Network Theory, Post-Humanism, multi- and interspecies analytics, Object Oriented and other “ontological” turns, speculative realism and new materialism, to name a few. Despite this turn away from the human, however, the final disappearance of the species seems to mark a limit for most disanthropic theorists; few welcome the possibility of human extinction. Disanthropy yes, misanthropy no.