A book I wrote, Developer’s Dilemma, was recently published by MIT Press. It is an ethnography that explores the secretive everyday worlds of game developers working in the global videogame industry. There is an excerpt of the book over at Culture Digitally if you’re interested in checking out some of the words prior to making a commitment to the rest of the text.
But I didn’t really want to start this year off just plugging my book. I mean, I did plug it. Just then. You should check it out. But that isn’t the point of this post. I recently Skyped into Tom Boellstorff’s graduate seminar to discuss the book. One of the questions they asked had to do with “game talk”: whether I thought game talk was more about boundary policing than about real utility and functionality. Game talk, in essence, is the use of game names as a shorthand means of referencing the rather complex mechanics and ideas that set certain games apart. It was a wonderful question, because in the book I write:
Deflategate, or Ballghazi, and the Conundrum of Expertise (or: why anthropologists should write about football)
It is the week of Super Bowl Sunday and I live with a Patriots fan. For the last two weeks all serious conversation in our house has revolved around some aspect of the upcoming game. Unless you have been living under a rock (or inside a book), you can probably guess that most of our conversations center around why a set of footballs used by the Patriots during the AFC Championship game were found to be under the minimum psi level specified by the NFL. Were the Patriots cheating by manually deflating footballs? Or is there a “natural” explanation for the deflation?
The interesting question from an STS perspective, and the hinge on which the cheating allegations turn, is whether or not the atmospheric conditions at the AFC Championship game could have caused a football to deflate what the NFL has called “a significant amount.” The question is a thorny one because it is entirely unclear who counts as an expert on football deflation, where one might turn to find an expert opinion, or even what criteria might be appropriate in determining who is, or is not, an expert on football deflation. Worse, how might one find a deflation expert who does not have a rooting interest for or against the Patriots at this late date? In short, who may enunciate the truths of football deflation?
Patriots head coach, and noted gridiron alchemist, Bill Belichick was the first to turn to science for an explanation. Like a modern-day Boyle, he held a press conference in which he detailed an experiment conducted at the Patriots facility which he claimed demonstrated that natural conditions caused “significant” football deflation at the AFC Championship game. His explanation was detailed and involved a special method of preparing the football for play (that is, getting the correct feel for the quarterback) that can change the psi level without manual deflation.
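For readers who want to see what the “natural” explanation amounts to, the physics at stake is roughly the ideal gas law: at fixed volume, a ball’s absolute pressure scales with its absolute temperature, so a ball inflated in a warm locker room loses gauge pressure when it cools on a winter field. Here is a minimal back-of-the-envelope sketch; the inflation pressure and the indoor/outdoor temperatures below are illustrative assumptions, not figures from the NFL, the Patriots, or this post:

```python
def gauge_psi_after_cooling(gauge_psi, temp_f_initial, temp_f_final,
                            atmospheric_psi=14.7):
    """Ideal-gas-law estimate of a ball's gauge pressure after a temperature drop.

    Gauge pressure is pressure above atmospheric, so we convert to absolute
    pressure, scale by the ratio of absolute temperatures, then convert back.
    """
    def f_to_kelvin(f):
        return (f - 32) * 5 / 9 + 273.15

    absolute_initial = gauge_psi + atmospheric_psi
    absolute_final = (absolute_initial
                      * f_to_kelvin(temp_f_final) / f_to_kelvin(temp_f_initial))
    return absolute_final - atmospheric_psi

# A ball inflated to 12.5 psi in a 72°F room, then taken out into 50°F weather:
print(round(gauge_psi_after_cooling(12.5, 72, 50), 2))  # → 11.37
```

On these assumed numbers, temperature alone costs the ball a bit over 1 psi, which is why the dispute turns so heavily on exactly what conditions obtained before and during the game.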
As the co-chairs of CASTAC, we’re taking this opportunity to thank you for visiting The CASTAC Blog and to share our plans for 2015 and beyond! But first, we’d like to introduce ourselves.
I’m Jenny Carlson, continuing co-chair of CASTAC. For those new to CASTAC and its blog, I’m a visiting assistant professor of anthropology at Southwestern University, as well as a visiting research fellow at Rice University’s Center for Energy and Environmental Research in the Human Sciences. I work on the everyday, affective dimensions of energy transitions in Germany and, more recently, in the United States. I focus on ordinary structures of feeling at sites of small-scale energy development, exploring how sentiments shape infrastructures for producing energy and engaging in politics. My aim is to theorize how the politics of energy unfolds among those who live at sites of energy development but don’t formally participate in these projects. Starting from this vernacular politics, I hope to better understand how site-specific dynamics push back against policy projections, offering a more nuanced perspective on the social underpinnings of participation in areas of rapid technoscientific development.
And I’m Nick Seaver, writing from UC Irvine, where I’m a PhD candidate in anthropology and a researcher with the Intel Science and Technology Center for Social Computing. I succeeded longtime co-chair Jennifer Cool, whose hard work has enabled our interest group to not only survive, but thrive as part of the AAA’s General Anthropology Division. I research the development of algorithmic recommender systems for music — yes, like Pandora — among a broad network of academic and corporate researchers, engineers, and scientists in the US. I’m very interested in the resonances between these algorithmic approaches to “culture” and those from anthropology’s past, so I am also researching the history of computing in sociocultural anthropology. My goal is to gain some analytical purchase for anthropologists on those things we call “big data” or “algorithms” — to enhance our ability to make critiques that are informed and have impact, and to recognize the continuities between these “new” phenomena and older technologies we are more familiar with.
In retrospect, 2014 may appear a pivotal year for technological change. It was the year that “wearable” technologies began shifting from geek gadget to mass-market consumer good (including the announcement of the Apple Watch and the rising popularity of fitness trackers); that smartphone and tablet usage outstripped that of desktop PCs for accessing the Internet, alongside growing interest in home automation and increasingly viable models for pervasive computing (such as Google’s purchase of smart thermostat maker Nest); and that computer algorithms, machine learning, and recommendation engines came increasingly to the fore of public awareness and debate (from Apple buying streaming service Beats to the effects of Facebook’s algorithms). Many of these shifts have been playing out worldwide, or at least in diverse contexts, such as Chinese online retailer Alibaba going public and smartphone maker Xiaomi speedily surpassing most rivals. It also proved to be an exciting year on The CASTAC Blog, where our team of Associate Editors and contributors drew our attention to this rapidly shifting technological landscape, and to pressing questions and debates driving anthropological inquiry into science and technology.
In today’s post, I continue my predecessor Patricia Lange’s tradition of reviewing themes and highlights on the blog from the past year. Some of these were topical, including energy, the environment, and infrastructure; crowdsourcing and the “sharing” economy; wearables, algorithms, and the “Internet of Things”; science communication, science’s publics, and citizen science. Others were more conceptual or even experimental: reflections on long-term ethnographic engagement with technology, broader issues of scientific (and ethnographic) authority, technological infrastructures as social infrastructures and tacit knowledges (such as Jenny Cool’s co-chair report), and, broadly, how to make anthropological research into science and technology relevant within and beyond academic circles.
When Jennifer Cool, Jordan Kraemer and I co-founded this blog we began on a web page and a prayer, or if you prefer, an incantation. Drawing on an “if you build it, they will come” inspiration, we felt that starting a blog would be a great way to encourage more conversation about science and technology studies. As members of CASTAC, the Committee on the Anthropology of Science, Technology and Computing, we felt excited about the organization’s goals, and we sought ways to connect to the other members of the group who chose to hang their hat in this corner of the American Anthropological Association.
We launched with a “start-up” mentality in which content was king. Our goal was to bring in guest authors while also sharing our work. Our initial goals were modest: as long as we could consistently put up one interesting post per week, we were happy. I was excited to see our blog grow and eventually garner several hundred views a month. Going forward, we realized we would need to create a sustainable model to expand the blog’s content and reach, and thus the idea of an Associate Editing team was born. I crafted a structure roughly modeled on publishing organizations in which Associate Editors (AEs) managed particular “beats,” or specific topic areas of interest. The idea was to encourage AEs to contribute posts about their own research as well as solicit exciting up-to-date content from other CASTAC members, researchers, and practitioners engaged in projects under the auspices of the anthropology and sociology of science, technology, and computing.
(Michael Sacasas is a PhD candidate in the “Texts and Technology” program at The University of Central Florida. He blogs about technology at The Frailest Thing. This post follows on our conversation from earlier in the year which touched on some of the foundational work on the relationship between western religion and technology.)
I am glad you brought up Nye’s pessimism over the consumer sublime and his consternation over the potential drying of the technological well. Nye wrote of the consumer sublime, as embodied by Las Vegas, as a “rush of simulations” and as marking a change from a technological sublime emphasizing production, particularly in the sense of new knowledge, to one concerned solely with consumption. How do you see the relation between simulation and technological production? Do you think Nye’s pessimism is warranted?
Timely question. There’s been more than a little angst of late about technological stagnation, much of it recently associated with PayPal founder Peter Thiel. For the past few years, Thiel has been warning about trends which, in his estimation, suggest that technological innovation may have stalled out over the last thirty or so years. We were promised flying cars, he is fond of saying, and we got 140 characters instead (a passing shot at Twitter, of course).
More than three months ago I wanted to write about the ethnographic butterfly effect and a key informant’s book. But there were strange things happening around games and social media at the time coupled with tragic events in Ferguson, Missouri. So I wrote about those things. It is more than three months later and there are still strange things happening in social media around games and everything in Ferguson, Missouri (and other parts of the United States) is somehow impossibly more sad.
So I’m going to write about the ethnographic butterfly effect and a key informant’s book on the game Jagged Alliance 2.
2014 was the year that the major players in qualitative data analysis (QDA) software released native versions for the Mac. For me, the timing was perfect: my dissertation fieldwork in North Dakota had drawn to a close by summer’s end, and my advisor was encouraging me to roll up my sleeves and start working through my material. I wasn’t sure which software package would serve me best, though, and most of the guidance I could find around the Web declined to make head-to-head comparisons. Then, too, I was mindful of the critiques charging that QDA software of any stripe contributes to the mystification of method and amounts to an overpriced means of avoiding index cards, glue, and scissors. I have nothing against index cards, but with operating system issues off the table and student licenses available for under $100, I decided to see if one of these tools could help me to organize my data and get writing.
After sizing up the available options, I downloaded trial versions of two well-known QDA products: NVivo and ATLAS.ti. I knew I was looking for an attractive and intuitive user interface that would allow me to code data in multiple formats: handwritten field notes, interview transcripts, documents I collected in the field. I had little faith that calculating the frequency and co-occurrence of the codes I assigned would unlock some deep, hidden structure of my material. But, taking a cue from one of the founding texts of software studies, I resolved to approach QDA software as an object that “deserves a reciprocation of the richness of thought that went into it, with the care to pay attention to what it says and what it makes palpable or possible.” How, I wondered, would my choice of software package make some kinds of analytical thinking possible and forestall others? What would my choice commit me to?