The CASTAC community joined together in 2012 to launch this blog and begin dialogue on contemporary issues and research approaches. Even though the blog is just getting off the ground, certain powerful themes are already emerging across different projects and areas of study. Key themes for the coming year include dealing with large data sets, connecting individual choices to larger economic forces, and translating the meaning of actions from different realms of experience.
Perhaps the most visible trend on our minds right now involves dealing with scale. How can anthropologists, ethnographers, and other STS scholars address large data sets and approaches in research and pedagogy, while also retaining an appropriate relationship to the theories and methods that have made our disciplines strong? As we look ahead to 2013, it would seem that a big question for the CASTAC community involves finding creative and ethical ways to deal with phenomena that range from the overwhelmingly large to the microscopic, in order to provide insight and serve our constituents in research and teaching.
Discussing large-scale forays into education and research
In the past two weeks, in her posts on MOOCs in the Machine, Jordan Kraemer, our dedicated Web Producer, has been reflecting on how higher education is grappling with MOOCs, or “massive open online courses,” which open up opportunities to those who have been shut out of traditional elite institutions. At the same time, serious questions have emerged about the ramifications of trading off cost savings against high-quality education. Kraemer points out that much of the debate ties into larger arguments about why people have been shut out of education in the first place, and about how the concentration of wealth and the neoliberalization of the university are challenging the old equation in which open-ended research ultimately strengthens and supports teaching. She proposes new forms of graduate education in which universities support recent graduates with teaching positions, allowing them to gain teaching experience, shifting teaching loads away from full-time faculty, and easing their transition into full-time positions.
Part of the issue with MOOCs has to do with questions of scale: how, or whether, individual lectures and course preparation can be generalized to large audiences in ways that provide solid instruction without compromising quality. Higher education depends upon staying current with research, and so far we do not have enough evidence to support the idea that MOOCs will work or will address all of the concerns emerging from the neoliberalization of the academy. Those of us interested in online interaction and pedagogy will be watching this space closely in the coming year.
Questions of scale also came into play with Daniel Miller’s discussion of doing Eight Comparative Ethnographies. Miller argues that doing several ethnographies at the same time will enable comparative questions that are not possible when investigating one site alone. He provides an example from social network sites. He asks, to what extent are particular behaviors the product of a type of site, a single site, or the intersection of cultures in which a site is embedded? Is the behavior what it is because it is happening on Facebook, or because the participants are Brazilian? A comparative study enables a level of analysis that is more inclusive than that derived from a single study. Expanding scale without compromising the traditions and benefits of ethnographic work remains a challenge for these and other future large-scale projects, which have the potential to provide crucial insights.
Making small-scale choices visible
As one set of researchers brings up issues with regard to enormously large-scale education and research, other STS participants on The CASTAC Blog are dealing with the opposite issue: grappling with how the dynamics of extremely personal and individualistic acts—such as the donation of sex cells—interact with large-scale economic and cultural forces. In her post on The Medical Market for Eggs and Sperm, Rene Almeling, the winner of the 2012 Forsythe Prize, provides an inside look into how donations of sex cells are connected to much larger economic forces that play out differently for women and men. Women are urged to regard egg donation as a feminine act of gift-giving; men are encouraged to see donation as a job. Almeling ties what might seem a purely individual act to economic forces, as well as to gendered, cultural expectations about families and reproduction. Gendered framings of donation not only impact the individuals who provide genetic material, but also strongly influence the structure of the market for sex cells.
Another key issue on our minds has to do with dealing with personal responsibility and showing how individual choices impact much larger social and economic forces in finance, computing, and going green.
In his post, On Building Social Robustness, David Hakken raises the question of how individuals contributed to large-scale economic and social crises, such as the recent disasters in the world of finance. His project is informed by work grappling with the first “5,000 years” of the history of debt. He proposes developing a notion of social robustness, parallel to the technical notion of robustness in computer science.
His work makes intriguing use of ideas drawn from the people we study, applying them as inspiration for making social change. When Hakken asks to what extent computing professionals are ethically responsible for the financial crisis, he is proposing a way of asking how a large-scale disaster can be traced to more individual, micro-level units of action. By investigating these connections, his project informs a conversation that is picking up steam in the anthropology of value.
Hakken’s reflections are especially haunting as he warns of the difficulties of building a career in anthropology and STS. As he moves toward retirement, his perspective is particularly valued in our community. As an antidote to more provincial institutional perspectives, he urges a more consolidated, community-minded approach in which we support each other in doing the important work that the CASTAC community has the potential to achieve.
Questions of scale and responsibility are once again intertwined in David J. Hess’s post on Opening Political Opportunities for a Green Transition. Hess points out that a non-partisan political issue has become partisan, despite the fact that the planet has now surpassed a carbon dioxide level it has not seen for at least 800,000 years! But because change is imperceptibly slow to the human eye, politics is allowed to complicate it. Hess investigates what he calls the “problem behind the problem”: the lack of political will to address environmental sustainability and social fairness, which considerably worsens the environmental problem itself. He offers real solutions through an ambitious three-part series of books that propose “alternative pathways,” or social movements centered on reform, pursued in part through the efforts of the private sector.
Notably, personal experiences in anthropology inform Hess’s work. Although he is in a sociology department and in an energy and environment institute, he points out that an anthropological sensibility continues to inform his thinking. While the discourse on these issues has traditionally revolved around a two-party system, Hess’s more anthropological approach makes visible other ideologies, such as localism and developmentalism, that may pave a more direct path to “good green jobs” and a more sensitive and responsible green policy. Again interacting with questions of scale, Hess’s notions of responsibility are grounded in understanding the “broad contours” of the “tectonic shifts” in ideology and policy that are underway in working toward a green transition in the United States and around the world. Without real action, however, his prognosis remains pessimistic.
Translating phenomena across different realms of experience
A theme that also emerged from our nascent blog’s initial posts had to do with understanding the ramifications of processing one realm of experience using metaphors and concepts from another. In her post on the Anthropological Investigations of MIME-NET, Lucy Suchman explores the darker side of entertainment and its relationship to military applications. She investigates how information and communication technologies have “intensified rather than dissipated” what theorists have described as the “fog of war.”
The problem is partly one of translation. How is it possible to maintain what military strategists call “situational awareness,” a constant and accurate mental image of relevant tactical information? To understand these dynamics, Suchman is studying activities such as The Flatworld Project, which brings together practitioners from the Hollywood film industry, gaming, and other areas of immersive computing. Such a project also involves analyzing how these approaches “extend human capacities for action at a distance,” and it presents ethical challenges to researchers as they grapple with military realms and connect seemingly disparate but interrelated areas such as war and healthcare.
Lisa Messeri’s post, Anthropology and Outer Space, offers an absolutely fascinating look into human conceptualization of place. She asks, why should earthlings be concerned about what is happening on Mars? Her work focuses on how “scientists transform planets from objects into places.” Significant milestones in space exploration, such as the transit of Venus between the Earth and the Sun (not scheduled to occur again until 2117) and the landing of the Mars rover Curiosity, provide rich areas to mine for understanding cultural notions of place and human exploration. Curiosity has its own Twitter account (!) and tweets freely about its experience of “springtime” in Mars’s southern hemisphere. Messeri argues that this kind of language “bridges” our worlds in that Curiosity somehow seems to experience something that is familiar to humans—springtime. Scientists are now studying things that are so far away that telescopes cannot take an image of them. Somehow, these “invisible” objects become familiar and complex. Planets begin to seem like places because of the way in which language “makes the strange familiar,” bridging the gap between events on an exoplanet and life on Earth.
Astronomers become place makers, and observing these processes shows how spaces become “social” even as, Messeri argues, “humans will never visit such planetary places.” Messeri shows how such conceptualizations can lead to the spread of erroneous scientific rumors that get reported by national news organizations. Her work not only shows how knowledge production can be compromised by the use of such metaphors, but also provides an intriguing look at how humans process invisible objects through the cultural production of imagined place.
Tune in next week!
Given that questions of scale were on our minds in 2012, it is especially fitting that we launch 2013 with a discussion about Big Data, and the challenges and opportunities that emerge when entities collect and combine huge data sets that are far too large to handle through ordinary coding schemes or desktop databases. Social scientists, technologists, and other researchers must grapple with numerous issues including legibility, data integrity, ethics, and usability. I am particularly pleased that David Hakken agreed to be interviewed by The CASTAC Blog to discuss his views. Next week, he provides fascinating insights into what the future holds for dealing with Big Data!
Before signing off, I would like to thank everyone for their participation in The CASTAC Blog, especially those who wrote posts, left comments, read articles, and tweeted our posts to the world. I very much appreciated everyone’s participation. The richness of the posts makes it too difficult to adequately cover all the content of the past year in one commentary, but rest assured that everyone’s post is contributing to the conversation and is valued by the CASTAC community.
In an effort to include more voices and keep a continuing flow of content, The CASTAC Blog is now seeking a core group of “frequent” contributors to keep pace with new developments in this space in 2013. Notice that I use the term “frequent” loosely—even a few posts over the course of the year make you a frequent contributor. Please consider sharing your thoughts and views with the CASTAC community. If you would like to join in, please email me at: email@example.com.
I look forward to an interesting and productive year ahead!
Patricia G. Lange
The CASTAC Blog
This is the second half of my conversation with Dominic Boyer about the emergence of “infrastructure” as both ethnographic focus and analytic within anthropology. You can read the first part of the interview here!
Ian Lowrie: I’d like to circle back to the question of how infrastructure is related to politics and liberalism. There’s a recent article by Kim Fortun calling for a revitalized, engaged anthropology of not just infrastructure, but infrastructural expertise, in the context of precisely the degradation of the most visible aspects of our infrastructure. At the same time, I think we also see strong, robust development of other types of infrastructures. Things like technical arrangements, financial instruments, logistical services, the computational and digital. I wonder if part of what drives the urge to expand the concept of infrastructure beyond things like roads and sewers is a political one.
Dominic Boyer: I think it is, and I think you’re right to point out that the story of infrastructure in the neoliberal heyday is not simply about abandonment. It’s a story of selective investment, and also of abandonment [laughs]. This is also the era in which informatic infrastructures, for example, develop. The Internet is one, but also the specialized information infrastructures that allowed finance to exert global real-time power that far exceeds the capacities of most governments to effectively regulate it. And that becomes a pivotal part of the story of the rebalancing of powers, I think, during the same time period. So the neoliberal era saw some remarkable infrastructural achievements in certain areas, whereas at the same time you might find your roads and your sewers decaying, which is interestingly oftentimes the focus of infrastructure studies. Most seem focused on what I would describe as basic biopolitical infrastructures and their fragmentation. A lot of research is, more or less latently, interrogating the aftermath of neoliberalism, specifically through the lens of biopolitical infrastructural decay. But you could tell a different story if you looked at different infrastructures. And maybe that’s a story that still needs to be told.
Ian: You’ve spent a lot of time thinking about social theory, often as ethnographic object, in your own work. I wonder, what do you think the role or function of social theory can be in this context? That is, in a post-neoliberal aporia, in which we find ourselves crushed by the immensity of the problems facing us, at both the ecological and political levels. What do you think the function of theoretical projects like investigating infrastructure is today?
Dominic: I think that there’s a great need for theoretical reinvention at this moment. This is a time in which we really do need to change the types of questions we’re asking, as well as I think to change the modes of communication and cooperation that we have. One thing that I find dismal is the extent to which many of the really fruitful and important voices in the anti-anthropocentric movement are sort of falling into the typical pattern that socialism did in the nineteenth century, factions fighting with each other for dominance, everyone asking: “Who’s got the better concept?” That’s still a very alpha-male sort of relationship to theory, and I think that’s part of the impasse that we’re facing. I think we need to remember that the roots of the anti-anthropocentric turn are ultimately to be found in feminism and ecofeminism. Thus, there has to be a shift in the ethics, habits, and institutions that we use as theoretically minded scholars at this moment. Cooperation is so much more important than making sure that your concept work finds its own school and acolytes. The project should be to create reinforcing circles that can help not only clarify important theoretical problems, but find ways – creative ways – of changing how we get our messages out there. At CENHS, we’re thinking now about working with game designers, for example, because we’re not sure that a book or an academic article will have a sufficient impact. Necessary, yes, but not sufficient. We have to work with artists closely, we have to work with designers closely, we have to work with media professionals closely. The thing about the anthropocene is that, as it becomes clearer and clearer that we can’t negate it and we can’t simply ignore it, the project of survival is going to require new forms of cooperative work and cooperative thinking. That we need to radically transform our dependency upon the forms and magnitudes of energy we use is increasingly obvious. And we already have much of the technology we would need to accomplish that transition. But to get to the point of developing a sustainable infrastructure for a low-carbon, low-toxin modernity is a question of political will and of public understanding. Because it’s not going to involve easy fixes. So theoretical clarification can help, but also I think as scholars we have to find ways to inform and inspire beyond our walls.
Ian: You’ve also just started work as part of the editorial collective for Cultural Anthropology, with Cymene [Howe] and Jim [Faubion]. And I wonder what you have planned for that journal as it transitions to open-access, online format, to make it a space for collaboration, rather than part of this broader technique of control that you elsewhere call digital liberalism?
Dominic: The biggest operational challenge we are facing is the open access transition. I don’t want to go into great detail about that because I think that much of that story is available elsewhere. The short version is that in the digital era, expensive-to-produce content can be reproduced with great simplicity, and that doesn’t fit well in a capitalist model, frankly. I think non-profit and low-profit academic publishing is the future. And the model has to be about sharing labor. If we’re all good about doing some small part of it competently and efficiently, this apparatus of a fairly large organization of cooperative labor can continue to put out a journal of high quality, but in a model that is truly non-profit. The thing is, by and large, we’ve found that people are pretty willing to participate as long as the apparatus is well- and fairly organized. We had a 98% success rate in inviting much esteemed, very busy people to join CA’s editorial board. That suggests to me that there’s not only an ability to do this, but there’s a great will out there, in the profession, to make this sort of a change. If we think of a journal as a cooperative project in which a large segment of the discipline are stakeholders, we can operate the journal without needing the for-profit apparatus of a publisher like Wiley. I think it’s the direction that all anthropology publishing should be moving toward. The reason that we’re putting quite a lot of time into the project at this early stage is to develop a sustainable infrastructure, a set of institutions, operations and principles, which could allow us to scale up open access and to carry it forward at the level of the discipline.
Ian: If the journal is an institution, it seems like the work that you’re describing is excellent evidence of what [Julia] Elyachar calls phatic labor, sociality as infrastructure, attendant to and around institutions.
Dominic Boyer: This raises questions of course, about, well, what is an infrastructure. Is it just what we used to call structure in the old structure and agency pairing? It feels to me that the connotations of “infrastructure” are more material than “structure” ever was. “Structure” belonged to the great heyday of mid-20th-century anthropology, culture theory, when this great explosion of creative semiological and hermeneutic analysis appeared. These were very much symbol-oriented, knowledge-oriented, thought-oriented models of construing what was distinctively human. It’s interesting to me that this synchronized more or less perfectly with the halcyon days of Keynesian modernity, with super-high growth rates across the Western world. My “Aha” moment came several years ago when I read Timothy Mitchell’s “Carbon Democracy” article. He argues that the only reason why those growth rates could stay so high was because oil was both practically and epistemically an inexhaustible, cheap-and-only-becoming-cheaper resource, and yet providing so much energy in a physico-material sense for all of the great institutions of modernity and all of its infrastructures, practices, habits, etc. For Mitchell, understanding and achieving “growth” is completely petropolitical. That understanding broke down with the formation of OPEC and the challenge to western control over the Middle East’s carbon fuel resources. And the Keynesian responses to the 1970s oil shocks fail because, in a sense, Keynesianism was carbon-based to its core, and couldn’t respond to a disruption of its carbon infrastructure or energopower.
Neoliberalism also didn’t have a good response to the crisis, but it certainly spoke a different language and could perform itself as an alternative. There was a cobbling-together of a sustainable carbon energy model over the ensuing decades. But now we realize that it’s not so much about problems of supply – with fracking and other developments, our energy demands seem to be covered. Peak oil will not save us anymore. Now we realize the problem is that carbon-fueled modernity is literally toxifying our environment, acidifying our oceans, and also creating climatological disruptions, now just tremors, but promising increasing levels of chaos and risk in the future. The problem is the anthropocene.
So the turn towards infrastructure, and the retreat from culture theory, for me are part of the same zeitgeist, the push-back against the preciousness of tarrying with semiotic systems in the era of what Anna Tsing has called “the damaged planet.” First we get the Marxian critiques – we get the Sid Mintzs and the Bill Roseberrys – scholars who argued, “Culture, sure. But what about power, what about domination?” That had its moment in the 60s and 70s, producing some amazing work, and then the vitality peters out, only returning again since 2008. I think that this hiatus had something to do with real-world Marxism becoming increasingly a more horrible, more obviously horrible, version of modernity than the liberal-capitalist modernity it was reacting against. But meanwhile, we’ve had first rumbles, and then storms gathering, marking the anti-anthropocentric turn, which brings us back from the semiological floating world, if I may be a little unkind, back to the concerns of the material, of the institutional, of the social, also in the posthuman sense of “social.” And so all the talk now about infrastructure seems to me like a kind of milestone on that path. As if to say: From here on out, at least for the foreseeable future, our theories of humanity and its presence and forms of life will have to take not only the symbolic and discursive seriously into account, but the material, and the technical, and the political. You can’t escape these questions any more, certainly not in the anthropocene.
In the media these days, Artificial Intelligence (henceforth AI) is making a comeback. Kevin Drum wrote a long piece for Mother Jones about what the rising power of intelligent programs might mean for the political economy of the United States: for jobs, capital-labor relations, and the welfare state. He worries that as computer programs become more intelligent day by day, they will be put to more and more uses by capital, thereby displacing white-collar labor. And while this may benefit both capital and labor in the long run, the transition could be long and difficult, especially for labor, and exacerbate the already-increasing inequality in the United States. (For more in the same vein, here’s Paul Krugman, Moshe Vardi, Noah Smith, and a non-bylined Economist piece.)
The clearest academic statement of this genre of writing is Erik Brynjolfsson and Andrew McAfee’s slim book Race Against The Machine. Brynjolfsson and McAfee suggest that the decline in employment and productivity in the United States starting from the 1970s (and exacerbated by our Great Recession of 2008) is a result of too much technological innovation, not too little: specifically, the development of more powerful computers. As they say, “computers are now doing many things that used to be the domain of people only. The pace and scale of this encroachment into human skills is relatively recent and has profound economic implications.” To bolster their case, they introduce a variety of examples where programs are accomplishing intelligent tasks: machine translation, massive text discovery programs used by lawyers, IBM’s Watson and Deep Blue, the use of kiosks and vending machines and on and on. They end the book with a set of suggestions that would surprise no one who reads economists: more redistribution by the state, an education system that increases human capital by making humans partners with machines rather than their competitors, adopting a wait-and-see approach to the many new forms of collaboration that we see in this digital age (crowd-sourcing, the “sharing economy” etc.) and a few others.
Curiously though, Race Against the Machine has very little to say about the innards of this technology whose rise concerns the writers so much. How or why these programs work is simply not relevant to Brynjolfsson and McAfee's argument. Usually, their go-to explanation is Moore's Law. (Perhaps this is another way in which Moore's Law has become a socio-technical entity.) To answer the question "why now," they use a little fable (which they credit to Ray Kurzweil). If you take a chessboard and put 2 coins on the first square, 4 on the second, 8 on the third, and so on, doubling with each square, the numbers start to get staggeringly large pretty soon. They suggest that this is what happened with AI. Earlier advances in computing power were too small to be noticed – but now we've passed a critical stage. Machines are increasingly going to do tasks that only humans have been able to in the past.
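The arithmetic of the fable is easy to check. The quick sketch below is my own illustration (not from the book); it just tallies the doubling across all sixty-four squares:

```python
# Doubling fable: 2 coins on the first square, 4 on the second, 8 on the third...
piles = [2 ** square for square in range(1, 65)]    # one pile per square, doubling each time
print(f"coins on the 64th square: {piles[-1]:,}")   # 18,446,744,073,709,551,616
print(f"coins on the whole board: {sum(piles):,}")  # 36,893,488,147,419,103,230
```

The point, for Brynjolfsson and McAfee, is simply that steady doubling eventually produces numbers that dwarf everything that came before.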
If this explanation sounds a bit like hand-waving to you as an STS scholar, I completely agree. How might one think about this new resurgence of AI if one thinks about it as a historian of technology or a technology studies scholar rather than as a neo-liberal economist? In the rest of this post, I want to offer a reading of the history of Artificial Intelligence that is slightly more complicated than "machines are growing more intelligent" or "computing power has increased exponentially, leading to more intelligent programs." I want to suggest that opening up the black box of AI can be one way to think about what it means to be human in a world of software. And by "opening up the black box of AI," I don't mean some sort of straightforward internalist reverse-engineering but rather trying to understand the systems of production that these programs are a part of and why they came to be so. (In this post, I won't be going into the political economic anxieties that animated many of the articles I linked to above, although I think these are real and important – even if sometimes articulated in ways that we may not agree with.)
With that throat-clearing out of the way, let me get into the meat of my argument. Today, even computer scientists laugh about the hubris of early AI. Here is a document from 1966 from Seymour Papert, describing what he calls his “summer vision project”:
The summer vision project is an attempt to use our summer workers effectively in the construction of a significant part of a visual system. The particular task was chosen partly because it can be segmented into sub-problems which allow individuals to work independently and yet participate in the construction of a system complex enough to be a real landmark in the development of “pattern recognition”.
One summer to create a system “complex enough to be a real landmark” in pattern recognition! Good luck with that!
Even back then, AI was not a static entity. Papert's approach – which was the dominant one in the middle of the 1960s – was what Mikel Olazaran calls "symbolic AI." Essentially, in this approach, intelligence was seen to reside in the combination of algorithms, knowledge representation, and the rules of symbolic logic. Expert systems that were built in the 1980s (and about whose "brittleness" Diana Forsythe has written memorably) were the culmination of symbolic AI. These were built by AI experts going out and "eliciting knowledge" from "domain experts" through interviews (although as Forsythe points out, they would usually only interview one expert), and then crafting a series of intricate rules about what they thought these experts did.
The other approach was connectionism. Rather than linguistically-rendered rules constituting thought, connectionists suggested that thought came out of something that looked like a bunch of inter-connected neurons. This research drew extensively on the idea of the "perceptron" that Frank Rosenblatt conceived in the 1950s, and on ideas of feedback in cybernetics. Perceptrons were discredited when Papert and Minsky, the leaders of the symbolic AI faction, showed that perceptrons could not model more than simple functions, and then revived in the 1980s with the invention of the back-propagation algorithm by Rumelhart and McClelland. As Olazaran shows, the perceptron controversy (i.e. the way perceptrons were discredited and then revived) is an excellent test-case for showing that science and technology in the making allow for a great deal of interpretive flexibility.
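The limitation that Minsky and Papert seized on is easy to demonstrate. Here is a minimal sketch of Rosenblatt-style perceptron learning (my own illustration, not drawn from Olazaran's account): a single-layer perceptron converges on a linearly separable function like AND, but can never reach perfect accuracy on XOR, which is not linearly separable.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Classic perceptron learning rule: nudge the weights after each mistake."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            update = lr * (target - pred)
            w, b = w + update * xi, b + update
    return w, b

def accuracy(X, y, w, b):
    preds = np.array([1 if xi @ w + b > 0 else 0 for xi in X])
    return float(np.mean(preds == y))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])   # linearly separable: the perceptron learns it
y_xor = np.array([0, 1, 1, 0])   # not linearly separable: it never can

for name, y in [("AND", y_and), ("XOR", y_xor)]:
    w, b = train_perceptron(X, y)
    print(name, "accuracy:", accuracy(X, y, w, b))
# AND reaches 1.0; XOR stays below 1.0 no matter how long the rule runs.
```

Back-propagation mattered precisely because multi-layer networks trained with it can represent functions like XOR that a single perceptron cannot.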
The revival of connectionism led to more and more interest in using statistical methods to do intelligent tasks. Today the name that computer scientists use for these methods is “machine learning” (or sometimes “data mining”). Indeed, it is machine learning that is at the heart of many of the programs discussed by Brynjolfsson and McAfee (and interestingly enough, the phrase does not appear at all in the book). The question of how this shift of interest to machine learning happened is open. Perhaps it was because funds for symbolic AI dried up; also, corporations had, by this time, started to increasingly use computers and had managed to collect a significant amount of data which they were eager to use in new ways. I, for one, am looking forward to Matthew Jones’ in-progress history of data mining.
The difference between machine learning/connectionist approaches and symbolic AI is that in the former, the “rules” for doing the intelligent task are statistical – expressed as a rather complicated mathematical function – rather than linguistic, and generated by the machine itself, as it is trained by humans using what is called “training data.” This is not to say that humans have no say about these rules. Indeed, many of the decisions about the kinds of statistical methods to use (SVMs vs. decision trees, etc.), the number of parameters in the classifying function, and of course, the “inputs” of the algorithm, are all decided by humans using contingent, yet situationally specific criteria. But while humans can “tune” the system, the bulk of the “learning” is accomplished by the statistical algorithm that “fits” a particular function to the training data.
[Figure: Expert systems as the culmination of symbolic AI. The "intelligence" of the system consists of the rules that make up an "inference engine" embodying the way the expert works.]
[Figure: In machine learning and connectionist approaches, the "intelligence" resides in a mathematical function that is learned by feeding the algorithm "training" data.]
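As a concrete illustration of that contrast, here is a minimal sketch, assuming scikit-learn and synthetic data, of the division of labor described above: humans choose the family of model and its tuning parameters and supply labeled training data, while the algorithm fits the classification rule itself rather than a programmer writing it as explicit if-then rules.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Stand-in "training data": 1,000 labeled examples with 20 numeric features each.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Human choices: the kind of model (a decision tree rather than, say, an SVM)
# and its tuning parameters, such as how deep the tree may grow.
clf = DecisionTreeClassifier(max_depth=5)

# The "learning": the algorithm fits a classification function to the training data.
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```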
I want to suggest that the rise of machine learning was accompanied by three institutional changes in the way AI research was done: first, researchers stopped worrying about the generalizability of their algorithms and concentrated on specific tasks. Second, they concentrated their efforts on building infrastructures through which machine learning algorithms could be scaled and used as off-the-shelf components. And finally, they turned to networked computing itself to combine these machine learning algorithms with human actors. (The culmination of these is crowd-sourcing, which is both a source of data for machine learning algorithms and part of a scaled machine learning architecture for accomplishing tasks using both humans and machines.)
AI practitioners today – in particular, machine learning researchers – are perfectly willing to be practical, to concentrate on particular domains rather than look for general purpose laws. For instance, consider this definition of machine learning given by Tom Mitchell (the author of a canonical textbook on the topic) in a piece written in 2006 – one of those pieces where the giants of the field try to set out the outstanding research problems for young researchers to solve.
To be more precise, we say that a machine learns with respect to a particular task T, performance metric P, and type of experience E, if the system reliably improves its performance P at task T, following experience E. Depending on how we specify T, P, and E, the learning task might also be called by names such as data mining, autonomous discovery, database updating, programming by example, etc.
Researchers concentrate on coming up with practical definitions of T, P and E that they can then proceed to solve. Back when I used to be a computer scientist designing algorithms to detect "concepts" in video, my adviser used to complain when IBM researchers designed classifiers to detect concepts like "Bill Clinton" or "Hillary Clinton." His point was two-fold: first, designing a classifier to detect "Bill Clinton" in a video was not good science because it in no way advanced the research agenda of detecting concepts in video. But second, IBM would do it anyway because it could (it had the resources) and because it would be useful.
Recently, Noam Chomsky criticized this tendency of AI researchers to use probabilistic models in computational linguistics. Probabilistic models were fine, Chomsky suggested, if all you wanted to do was make Google Translate work, but they were not real science, they were not causal. In response, Peter Norvig (co-author of the famous textbook on Artificial Intelligence and the director of research at Google) replied that probabilistic modeling works – and in a way that those elegant theories of the General Problem Solver never did. And Norvig goes even further in advocating that computer scientists not worry too much about generalized theories – because nature, he suggests, is far too complicated for that. (I really recommend reading Norvig’s response to Chomsky since it’s a kind of debate that computer scientists rarely have.)
The second factor is scale. Computer scientists have concentrated a lot of effort in the past few years on building infrastructures that make machine learning easier. This includes black-boxed, off-the-shelf versions of these algorithms that can simply be trained and used. It also includes massive frameworks in which these algorithms can be coupled to each other and multiplied to build more and more complicated systems. So, for instance, you connect 1,000 classifiers together (one for Bill Clinton, another for cars, etc.), and the result is a powerful system that can be used for "intelligent" tasks in video concept detection ("find all clips in videos where Bill Clinton is driven around," for example). IBM's Watson is exactly an example of such a massively scaled system – scaled to almost unimaginable levels solely so that it could match the human challengers in one particular task, Jeopardy. In Watson, there are classifiers whose outputs train other classifiers, and on and on.
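To make the idea of coupling classifiers concrete, here is a deliberately toy sketch; the classifier objects, feature format, and function names are all invented for illustration and are not drawn from any real video-retrieval system.

```python
from typing import Callable, Dict, List

Frame = List[float]  # stand-in for the extracted visual features of one video frame

def find_clips(frames: List[Frame],
               concept_classifiers: Dict[str, Callable[[Frame], float]],
               query_concepts: List[str],
               threshold: float = 0.5) -> List[int]:
    """Return indices of frames where every queried concept scores above threshold."""
    hits = []
    for i, frame in enumerate(frames):
        scores = {name: clf(frame) for name, clf in concept_classifiers.items()}
        if all(scores[c] >= threshold for c in query_concepts):
            hits.append(i)
    return hits

# Dummy per-concept classifiers standing in for separately trained models.
classifiers = {
    "bill_clinton": lambda f: f[0],  # pretend confidence score from a trained detector
    "car": lambda f: f[1],
}
frames = [[0.9, 0.8], [0.9, 0.1], [0.2, 0.7]]
print(find_clips(frames, classifiers, ["bill_clinton", "car"]))  # -> [0]
```

The engineering work lies in training the individual detectors and in the infrastructure that runs thousands of them over enormous video collections; the combination step itself can be as simple as the rule above.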
The third factor is, I think, the most interesting. Machine learning researchers do not really expect their algorithms and programs to do all the work. Rather, they see these algorithms as working with humans, who might supervise them, tune them, provide them with data and interpret their results and so on. They seem to have taken to heart Terry Winograd’s final recommendation at the end of Understanding Computers and Cognition, in which he actively disavowed the symbolic AI that he had worked so hard to create, and instead suggested that computer scientists concentrate on designing useful assemblages of humans and computers rather than on creating intelligent programs. This is brought out beautifully in Alexis Madrigal’s recent story about Netflix’s recommendation algorithm. Briefly, Madrigal set out to find out how Netflix’s quirky genre system (Scary Movies Based on Books, Comedies Featuring a Strong Female Lead, etc., 76,897 micro-genres in all) is generated. The answer: through a combination of humans and machines (again, nothing new here for STS scholars). First, the Netflix engineers spent a few years creating recommendation engines using predictive algorithms that trained on the “ratings” that users provided before deciding that viewer ratings alone were not enough. They then started to employ people who “code” a movie (in terms of genre, etc.) based on a rigorous how-to manual that Netflix has created, especially for them. This movie-categorization was then combined with the ratings of Netflix viewers. By actively clustering both of these in some mathematical space, and then interpreting the clusters that result, they produced the quirky genres that Netflix has since become famous for. This is not a bad example of how AI is put to work today in systems of production. (It’s a pity that Madrigal focuses his attention so relentlessly on the specific recipe used to create the Netflix genres because his account offers fascinating glimpses of a whole host of behind-the-scenes work going on at Netflix).
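A rough sketch of that human-plus-machine pattern might look like the following. It assumes scikit-learn and invents the feature layout, so it should be read as an illustration of the general pattern rather than a description of Netflix's actual pipeline: hand-coded movie attributes and aggregate viewer ratings are combined in one feature space, a clustering algorithm proposes groupings, and people then inspect and name the clusters.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_movies = 500

human_tags = rng.integers(0, 2, size=(n_movies, 10))  # e.g. "based on a book", "strong female lead"
viewer_ratings = rng.random(size=(n_movies, 5))        # e.g. average ratings by viewer segment

features = np.hstack([human_tags, viewer_ratings])     # one row per movie

# The machine proposes groupings...
clusters = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(features)

# ...and humans interpret them, e.g. by reading off the dominant tags in each
# cluster before deciding what genre-like name it deserves.
for c in range(3):
    members = features[clusters == c]
    top_tags = np.argsort(members[:, :10].mean(axis=0))[::-1][:3]
    print(f"cluster {c}: {len(members)} movies, dominant tag columns: {top_tags}")
```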
These three factors – creating probabilistic machine learning classifiers for concepts without worrying about their generality, creating and using massive infrastructures to combine and mix-and-match machine learning algorithms, and combining the outputs of humans and machines in carefully human-supervised ways – seem to me to better explain the revival of AI, as well as why many people fear its political-economic implications. I will admit upfront that this is a mostly internalist story of what is happening, one that boils down to "Programs are being trained to do really particular tasks using data (perhaps specially created!) and then combined in ad-hoc ways." But I hope that it is a little more complicated than "computer power has increased by leaps and bounds" or "machines are getting more and more intelligent" (although neither of these is necessarily inaccurate). What I really hope is that it will help us think through the political-economic implications of the new AI. And that, perhaps, is a subject best left for another day (given the length of this blog post!).
At the 112th annual meeting of the American Anthropological Association last November, I was pleased to take the reins as co-chair of CASTAC alongside returning co-chair Jennifer Cool. I’d like to take this opportunity to thank my predecessor Rachel Prentice for all of her hard work in building our organization up to its current strength and numbers. In what follows, I’ll introduce myself and share some thoughts about CASTAC and its future.
I come to CASTAC and, more broadly, to science and technology studies via the study of sustainable development in non-urban spaces. My current project explores the intersection between renewable energy projects and ordinary life in a northern German village on the path to zero-sum living. Germany’s current “energy turn,” its transition from nuclear power to alternative energy sources, is transforming rural communities into sites of lucrative speculation, where capital investment and environmental politics take form around the technoscientific promise of renewables. In the two decades since the transition was coded into federal law, the village where I work has been terraformed by the installation of wind turbines, solar arrays and now biofuel processing technology. Practices that were already commonplace in the village (such as the harnessing of wind for land reclamation, the use of sun for heat or the use of biomass for fertilization) have been mutated and scaled up into engines of ecocapital (as wind turbines, solar panels, and biogas processing plants) at the same time that villagers have been recast as energy citizens who take part in the transition by recycling, installing solar panels or investing in wind parks or biofuel ventures.
In the midst of these events, my present work takes up one aspect of this process, namely the fluid technologies that congeal around technoscientific materials in ordinary life, and the repercussions these have for public participation and social mobility amongst villagers. I theorize how energy comes to be known and re-membered in this space, both as a commodity and as a sense of animacy or vitality in everyday life, with an eye to the ways in which these understandings map onto class formations. As an ethnographer, I am particularly committed to articulating the mundane sensory experience of renewable energy technologies, the feelingful registers through which energy takes form in everyday collisions of matter, and the ways in which energies teem and surge, becoming unruly in some contexts and “ruly” in others. This work continually challenges me to consider the implications of sensory ethnography and public culture studies for STS, and vice versa.
As I pursue my own research, I am reinvigorated by CASTAC members’ shared commitment to the ethnography of technoscientific worlds across human and nonhuman things. CASTAC was part of anthropology’s early forays into cultures of computing and digital technology. Today, we are experiencing a resurgence as emerging technologies bring new and unexpected ethnographic objects under the purview of STS. The CASTAC Blog is one valuable resource that has spun out of that resurgence, allowing anthropologists of STS to build bridges with diverse publics, including with STS scholars in other fields. CASTAC opens a space for dialogue with non-anthropologists as well as with anthropologists whose work speaks to our own, whether in political anthropology, environmental anthropology, feminist anthropology, queer anthropology, or other subfields. These connections are invaluable at a time when pressing social issues from income inequality to global warming compel novel approaches to technopolitical concerns. Additionally, university systems are in a state of transformation, where previously commonsense boundaries between academic and popular culture seem to be in flux, alternately rigid and shifting, porous and impermeable. Situations such as these call for experimental approaches to making and sharing knowledge in a variety of settings, whether via methodological innovation, new ethnographic writing, and/or dialogue through multiple media. As the posts on this blog demonstrate, CASTAC members are committed to discussing their research in new and diverse venues, in hopes of collaboratively mapping new paths for action in an unsettled world. As a co-chair of CASTAC, I am pleased to help facilitate this important work, and welcome any questions or comments you may have regarding CASTAC and its projects in the days ahead.
Since my undergraduate days, I’ve both aspired to do feminist anthropology and been fascinated with people’s everyday engagement with mundane (and extraordinary) technologies. I can’t express how thrilled and honored I am to receive the 2013 Diana Forsythe Prize for The Life of Cheese: Crafting Food and Value in America (University of California Press), my ethnography of American artisanal cheese, cheesemaking and cheesemakers. I do not present a summary of the book here (if interested, the Introduction is available on the UC Press website: http://www.ucpress.edu/book.php?isbn=9780520270183). Instead, I alight on some of the STS-related themes that run throughout my book (and especially Chapter 6): regulating food safety and promoting public health, artisanal collaboration with microbial agencies, and the mutual constitution of production and consumption.
Real Cheese or Real Hazard — or Both?
By U.S. law, cheese made from raw (unpasteurized) milk, whether imported or domestically produced, must be aged at least 60 days at a temperature no less than 1.7°C before being sold. The 60-day rule is intended to offer protection against pathogenic microbes that might thrive in the moist environment of a soft cheese. But while the U.S. Food and Drug Administration (FDA) views raw-milk cheese as a potential biohazard, riddled with threatening bugs, fans see it as the reverse: a traditional food processed for safety by the metabolic action of good microbes—bacteria, yeast, and mold—on proteins and carbohydrates in milk. The very quality that gives food safety officials pause about raw-milk cheese — that it is teeming with an uncharacterized diversity of microbial life — makes handcrafting it a rewarding challenge for artisan producers, and consuming it particularly desirable for gastronomic and health-conscious eaters, drawn to its purportedly “pro-biotic” aspect.
I have introduced the notion of microbiopolitics as a theoretical frame for understanding debates over the gustatory value and health and safety of cheese and other perishable foods.[i] Calling attention to how dissent over how to live with microorganisms reflects disagreement about how humans ought to live with one another, microbiopolitics offers a way to frame questions of ethics and governance. The U.S. American revival of artisanal cheesemaking and rising enthusiasm for raw milk and raw-milk cheese exemplify microbiopolitical negotiations between a hyper-hygienic regulatory order bent on taming nature through forceful eradication of microbial contaminants — a Pasteurian social order (as currently forwarded by the FDA) — and what I have called a post-Pasteurian alternative committed to working in selective partnership with ambient microbes.
As Bruno Latour relates in The Pasteurization of France, in recognizing microbes as fully enmeshed in human social relations, early Pasteurians legitimated the hygienist’s right to be everywhere; once microbes can be revealed in the lab (Pasteurians continue to believe) they may be eradicated — only then will “pure” social relations be able to flourish. In contrast, post-Pasteurians move beyond an antiseptic attitude to embrace mold and bacteria as potential friends and allies. The post-Pasteurian ethos of today’s artisanal food cultures—recognizing microbes to be ubiquitous, necessary, and even (sometimes) tasty—is productive of modern craft knowledge and expanded notions of nutrition, and it produces a new vocabulary for thinking about conjunctures of cultural practice and agrarian environments, along the lines of what the French call terroir.
I want to be very clear: some bacteria and viruses make some people sick, something no food-maker wants to risk. Successful post-Pasteurian food-makers are never cavalier about pathogenic risk. Dairy farmers who trade in raw milk and cheesemakers who work with it are exceptionally careful about hygiene—they are not anti-Pasteurian. To the contrary, they work hard to distinguish between “good” and “bad” microorganisms and to harness the former as allies in vanquishing the latter. Post-Pasteurianism takes after Pasteurianism in taking hygiene seriously; it differs in being more discriminating.
Focused on the aggregate of the national population, Pasteurian microbiopolitics has been criticized for taking a one-size-fits-all approach to food safety, predicating regulation on industrial-scale production (relying on pasteurization or irradiation to kill pathogens presumed to be present owing to insanitary agricultural practices) and population-wide consumption (young raw-milk cheese is forbidden to all because it poses a particular threat to immunocompromised and pregnant consumers). Post-Pasteurians counter that fresh milk is not inherently “dirty” and in need of pasteurization; contamination is a matter of human agricultural practice, not something in the “nature” of milk. Moreover, many assert that the heterogeneity of the public in “public health” should not be reduced to its lowest common denominator; people are individuals. In other words, the post-Pasteurian position lobbies for socio-legal latitude that would permit potentially risky foods to be made and consumed safely by some, if not others.
I worry, though, that as enthusiasm for the beneficial agencies of microorganisms grows, underinformed enthusiasts may overestimate the power of “nature’s” microbial goodness.[ii] I fret even more when such a position is characterized—as I am beginning to see—in terms of “post-Pasteurianism.” Last year I discovered for sale on the Web t-shirts, bumper stickers, even maternity shirts and baby bibs emblazoned with a smiling microbe and the slogan, “I’m a Post Pasteurian.”
Descriptive copy explains, “What is a ‘Post Pasteurian’? A really smart person who understands that pasteurization kills all (yes, ALL) the good in food.”[iii] This is not how I defined “post-Pasteurian” in my 2008 article or 2013 book. For the record, I refuse the claim. Pasteurization does not “kill” all the good in food. The position putatively espoused by the t-shirt would pit a beneficent “nature” supernaturally enlivened by microorganisms against a power-greedy “culture” championed by regulatory overreach. But the natural-cultural reality is that milk and fermented foods such as cheese, yogurt, miso, and beer are multispecies muddles that resist such simplistic parsing.
There’s nothing essential about a food’s goodness. Humility is required to navigate (not necessarily manage, let alone steward) post-Pasteurian microbial ecologies.
By “microbiopolitics,” then, I mean to describe and analyze regimes of social management, both governmental and grassroots, which admit to the vital agencies of microbes, for good and bad. Including beneficial microbes like starter bacterial cultures and cheese mold — in addition to the harmful E. coli, Listeria monocytogenes, and Mycobacterium tuberculosis — in accounts of food politics extends the scaling of agro-food studies into the body, into the gastrointestinal. “Microbes connect us through diseases,” writes Latour, “but they also connect us, through our intestinal flora, to the very things we eat.”[iv] At the beginning of the twenty-first century, as it comes to light that 90 percent of what we think of as the human organism turns out to be made up of microorganisms, the truism “We are what we eat” has never seemed more literal. One aim of my work has been to show how artisan food-makers carefully sort out microbial friends from foes, work (not faith) that produces the conditions through which a post-Pasteurian dieticity might safely emerge—for some if not others.
[i] See Heather Paxson, “Post-Pasteurian Cultures: The Microbiopolitics of Raw-Milk Cheese in the United States,” Cultural Anthropology 23(1): 15-47, 2008. And also Heather Paxson, The Life of Cheese: Crafting Food and Value in America. Berkeley: University of California Press, 2013.
[ii] See also Gareth Enticott, “Risking the Rural: Nature, Morality and the Consumption of Unpasteurized Milk,” Journal of Rural Studies 19(4): 411-424, 2003.
[iv] Bruno Latour, The Pasteurization of France. Cambridge: Harvard University Press, 1993, p. 37.
It’s been nearly four years since The Asthma Files (TAF) really took off (as a collaborative ethnographic project housed on an object-oriented platform). In that time our work has included system design and development, data collection, and lots of project coordination. All of this continues today; we’ve learned that the work of designing and building a digital archive is ongoing. By “we” I mean our “Installation Crew,” a collective of social scientists who have met almost every week for years. We’ve also had scores of students, graduate and undergraduate, at a number of institutions use TAF in their courses, through independent studies, and as a space to think through dissertations. In a highly distributed, long-term, ethnographic project like TAF, we’ve derived a number of modest findings from particular sites and studies; the trick is to make sense of the patterned mosaic emerging over time, which is challenging since the very tools we want to use as a window into our work — data visualization apps leveraging semantic tools, for example — are still being developed.
Given TAF’s structure — thematic filing cabinets where data and projects are organized — we have many small findings related to specific projects. For example, in our most expansive project, “Asthmatic Spaces,” comparisons of data produced by state agencies (health and environmental) have made various layers of knowledge gaps visible: spaces where certain types of data, in certain places, are not available (Frickel, 2009). Knowledge gaps can be produced by an array of factors, both within organizations and because of limited support for cross-agency collaboration. Another focus of “Asthmatic Spaces” (which aims to compare the asthma epidemic in a half dozen cities in the U.S. and beyond) is to examine how asthma and air quality data are synced up (or not) and made usable across public, private, and nonprofit organizations.
In another project area, “Asthma Knowledges”, we’ve gained a better understanding of how researchers conceptualize asthma as a complex condition, and how this conceptualization has shifted over the last decade, based on emerging epigenetic research. In “Asthma Care” we’ve learned that many excellent asthma education programs have been developed and studied, yet only a fraction of these programs have been successfully implemented, such as in school settings. Our recent focus has been to figure out what factors are at play when programs are successful.
Below I offer three overarching observations, taken from what our “breakout teams” have learned working on various projects over the last few years:
*In the world of asthma research, data production is uneven in myriad ways. This is the case at multiple levels — seen in public health surveillance and our ability to track asthma nationally, as well as at the state and county level; as seen through big data, generated by epigenetic research; in the scale of air quality monitoring, which is conducted at the level of cities and zip codes rather than at neighborhood or street level. Uneven and fragmented data production is to be expected; as ethnographers, we’re interested in what this unevenness and fragmentation tells us about local infrastructure, environmental policy, and the state of health research. Statistics on asthma prevalence, hospitalizations, and medical visits are easy to come by in New York State and California, for example; experts on these data sets are readily found. In Texas and Tennessee, on the other hand, this kind of information is harder to come by; more work is involved in piecing together data narratives and finding people who can speak to the state of asthma locally. Given that most of what we know about asthma comes from studies conducted in major cities, where large, university-anchored medical systems help organize health infrastructure, we wonder what isn’t being learned about asthma and air quality in smaller cities, rural areas, and the suburbs; what does environmental health (and asthma specifically) look like beyond urban ecologies and communities? We find this particularly interesting given the centrality that place has for asthma as a disease condition and epidemic.
*Asthma research is incredibly diffuse and diverse. Part of the idea for The Asthma Files came from Kim Fortun and Mike Fortun’s work on a previous project where they perceived communication gaps between scientists who might otherwise collaborate (on asthma research). Thus, one of our project goals has been to document and characterize contemporary asthma studies, tracing connections made across research centers and disciplines. In the case of a complex and varied disease like asthma — a condition that looks slightly different from one person to the next and is likely produced by a wide composite of factors — the field of research is exponential, with studies that range from pharmaceutical effects and genetic shifts, to demographic groups, comorbidities, and environmental factors like air pollution, pesticides, and allergens. Admittedly, we’ve been slow to map out different research trajectories and clusters while we work to develop better visualization tools in PECE (see Erik Bigras’s February post on TAF’s platform).
What has been clear in our research, however, is that EPA- and/or NIEHS-funded centers undertaking transdisciplinary environmental health research seem to advance collaboration and translation better than smaller-scale studies. This suggests that government support is greatly needed in efforts to advance understanding of environmental health problems. Transdisciplinary research centers have the capacity to conduct studies with more participants, over longer periods of time, with more data points. Columbia University’s Center for Children’s Environmental Health provides a great example. Engaging scientists from a range of fields, CCCEH’s birth cohort study has tracked more than 700 mother-child pairs from two New York neighborhoods, collecting data on environmental exposures, child health, and development. The Center’s most recent findings suggest that air pollution primes children for a cockroach allergy, which is a determinant of childhood asthma. CCCEH’s work has made substantial contributions to understandings of the complexity of environmental health, as seen in the above findings. Of course, these transdisciplinary centers, which require huge grants, are just one node in the larger field of asthma research. What we know from reviewing this larger field is that 1) most of what we know about asthma is based on studies conducted in major cities; 2) studies on pharmaceuticals greatly outnumber studies on respiratory therapy, studies on children outnumber studies on adults, studies on women outnumber studies on men, and many of the studies focused on how asthma is shaped by race and ethnicity concentrate on socioeconomic factors and structural violence; and 3) over the last fifty years, advancements in inhaler mechanics and design have been limited in key ways, especially when compared to the broader field of medical devices.
*Given the contextual dimensions of environmental health, responses to asthma are shaped by local factors. What’s been most interesting in our collaborative work is to see what comes from comparing projects, programs, and infrastructure across different sites. What communities and organizations enact what kinds of programs to address the asthma epidemic? What resources and structures are needed to make environmental health work happen? Environmental health research of the scale conducted by CCCEH depends on a number of factors and resources — an available study population, institutional resources, an air monitoring network, and medical infrastructure, not to mention an award-winning grassroots organization, WE ACT for Environmental Justice. Infrastructure can be just as uneven and fragmented as the data collected, and the two are often linked: despite countless studies that associate air pollution and asthma, less than half of all U.S. counties have monitors to track criteria pollutants. And although asthma education programs have been designed and studied for more than two decades now, implementation is uneven, even in the case of the American Lung Association’s long-standing Open Airways for Schools. This is not to say that asthma information and care isn’t standardized; many improvements have been made to standardize diagnosis and treatment in the last decade. Rather, it’s often the form that care takes that varies from place to place. One example of a successful program is the Asthma and Allergy Foundation of America’s Breathmobile program. Piloted in California more than a decade ago, Breathmobiles serve hundreds of California schools and more than 5,000 kids each year. Not only are eleven Breathmobiles in operation in California, but the program has also been replicated in Phoenix, Baltimore, and Mobile, AL. Part of the program’s success in California can be attributed to the work of the state’s AAFA chapter and its partnerships with health organizations, like the University of Southern California and various medical centers. Importantly, California has historically been a leader in responses to environmental health problems.
As we continue our research in various fieldsites, grow our archive, and implement new data visualization tools, we hope to expand on these findings and further synthesize our collective work. And beyond what we’re learning about the asthma epidemic and environmental health in the U.S., we’ve also taken many lessons from our collaborative work, and from the platform that organizes us.