Welcome to the CASTAC blog, the official blog of the Committee on the Anthropology of Science, Technology, and Computing!
This is the second half of my conversation with Dominic Boyer about the emergence of “infrastructure” as both ethnographic focus and analytic within anthropology. You can read the first part of the interview here!
Ian Lowrie: I’d like to circle back to the question of how infrastructure is related to politics and liberalism. There’s a recent article by Kim Fortun calling for a revitalized, engaged anthropology of not just infrastructure but infrastructural expertise, precisely in the context of the degradation of the most visible aspects of our infrastructure. At the same time, I think we also see strong, robust development of other types of infrastructures: technical arrangements, financial instruments, logistical services, the computational and digital. I wonder whether the urge to expand the concept of infrastructure beyond things like roads and sewers is, in part, a political urge.
Dominic Boyer: I think it is, and I think you’re right to point out that the story of infrastructure in the neoliberal heyday is not simply about abandonment. It’s a story of selective investment, and also of abandonment [laughs]. This is also the era in which informatic infrastructures, for example, develop. The Internet is one, but also the specialized information infrastructures that allowed finance to exert global real-time power that far exceeds the capacities of most governments to effectively regulate it. And that becomes a pivotal part of the story of the rebalancing of powers, I think, during the same time period. So the neoliberal era saw some remarkable infrastructural achievements in certain areas, whereas at the same time you might find your roads and your sewers decaying, which, interestingly, is oftentimes the focus of infrastructure studies. Most studies seem focused on what I would describe as basic biopolitical infrastructures and their fragmentation. A lot of research is, more or less latently, interrogating the aftermath of neoliberalism, specifically through the lens of biopolitical infrastructural decay. But you could tell a different story if you looked at different infrastructures. And maybe that’s a story that still needs to be told.
Ian: You’ve spent a lot of time thinking about social theory, often as ethnographic object, in your own work. I wonder, what do you think the role or function of social theory can be in this context? That is, in a post-neoliberal aporia, in which we find ourselves crushed by the immensity of the problems facing us, at both the ecological and political levels. What do you think the function of theoretical projects like investigating infrastructure is today?
Dominic: I think that there’s a great need for theoretical reinvention at this moment. This is a time in which we really do need to change the types of questions we’re asking, as well as, I think, to change the modes of communication and cooperation that we have. One thing that I find dismal is the extent to which many of the really fruitful and important voices in the anti-anthropocentric movement are sort of falling into the typical pattern that socialism did in the nineteenth century, factions fighting with each other for dominance, everyone asking: “Who’s got the better concept?” That’s still a very alpha-male sort of relationship to theory, and I think that’s part of the impasse that we’re facing. I think we need to remember that the roots of the anti-anthropocentric turn are ultimately to be found in feminism and ecofeminism. Thus, there has to be a shift in the ethics, habits, and institutions that we use as theoretically minded scholars at this moment. Cooperation is so much more important than making sure that your concept work finds its own school and acolytes. The project should be to create reinforcing circles that can help not only clarify important theoretical problems, but find ways – creative ways – of changing how we get our messages out there. At CENHS, we’re thinking now about working with game designers, for example, because we’re not sure that a book or an academic article will have a sufficient impact. Necessary, yes, but not sufficient. We have to work with artists closely, we have to work with designers closely, we have to work with media professionals closely. The thing about the anthropocene is that as it becomes clearer and clearer that we can’t negate it, we can’t simply ignore it, the project of survival is going to require new forms of cooperative work and cooperative thinking. That we need to radically transform our dependency upon the forms and magnitudes of energy we use is increasingly obvious.
And we already have much of the technology we would need to accomplish that transition. But to get to the point of developing a sustainable infrastructure for a low carbon, low toxin modernity is a question of political will and of public understanding. Because it’s not going to involve easy fixes. So theoretical clarification can help, but I also think that, as scholars, we have to find ways to inform and inspire beyond our walls.
Ian: You’ve also just started work as part of the editorial collective for Cultural Anthropology, with Cymene [Howe] and Jim [Faubion]. And I wonder what you have planned for that journal as it transitions to open-access, online format, to make it a space for collaboration, rather than part of this broader technique of control that you elsewhere call digital liberalism?
Dominic: The biggest operational challenge we are facing is the open access transition. I don’t want to go into great detail about that because I think that much of that story is available elsewhere. The short version is that in the digital era, expensive-to-produce content can be reproduced with great simplicity, and that doesn’t fit well in a capitalist model, frankly. I think non-profit and low-profit academic publishing is the future. And the model has to be about sharing labor. If we’re all good about doing some small part of it competently and efficiently, this apparatus of a fairly large organization of cooperative labor can continue to put out a journal of high quality, but in a model that is truly non-profit. The thing is, by and large, we’ve found that people are pretty willing to participate as long as the apparatus is well- and fairly organized. We had a 98% success rate in inviting highly esteemed, very busy people to join CA’s editorial board. That suggests to me that there’s not only an ability to do this, but there’s a great will out there, in the profession, to make this sort of a change. If we think of a journal as a cooperative project in which a large segment of the discipline are stakeholders, we can operate the journal without needing the for-profit apparatus of a publisher like Wiley. I think it’s the direction that all anthropology publishing should be moving toward. The reason that we’re putting quite a lot of time into the project at this early stage is to develop a sustainable infrastructure, a set of institutions, operations and principles, which could allow us to scale up open access and to carry it forward at the level of the discipline.
Ian: If the journal is an institution, it seems like the work that you’re describing is excellent evidence of what [Julia] Elyachar calls phatic labor, sociality as infrastructure, attendant to and around institutions.
Dominic: This raises questions, of course, about, well, what is an infrastructure. Is it just what we used to call structure in the old structure and agency pairing? It feels to me that the connotations of “infrastructure” are more material than “structure” ever was. “Structure” belonged to the great heyday of mid-20th-century anthropology, culture theory, when this great explosion of creative semiological and hermeneutic analysis appeared. These were very much symbol-oriented, knowledge-oriented, thought-oriented models of construing what was distinctively human. It’s interesting to me that this synchronized more or less perfectly with the halcyon days of Keynesian modernity, with super-high growth rates across the Western world. My “Aha” moment came several years ago when I read Timothy Mitchell’s “Carbon Democracy” article. He argues that the only reason why those growth rates could stay so high was because oil was both practically and epistemically an inexhaustible, cheap-and-only-becoming-cheaper resource, while providing so much energy in a physico-material sense for all of the great institutions of modernity and all of its infrastructures, practices, habits, etc. For Mitchell, understanding and achieving “growth” is completely petropolitical. That understanding broke down with the formation of OPEC and the challenge to western control over the Middle East’s carbon fuel resources. And the Keynesian responses to the 1970s oil shocks failed because, in a sense, Keynesianism was carbon-based to its core, and couldn’t respond to a disruption of its carbon infrastructure or energopower.
Neoliberalism also didn’t have a good response to the crisis, but it certainly spoke a different language and could perform itself as an alternative. There was a cobbling-together of a sustainable carbon energy model over the ensuing decades. But now we realize that it’s not so much about problems of supply – with fracking and other developments, our energy demands seem to be covered. Peak oil will not save us anymore. Now we realize the problem is that carbon-fueled modernity is literally toxifying our environment, acidifying our oceans, and also creating climatological disruptions, now just tremors, but promising increasing levels of chaos and risk in the future. The problem is the anthropocene.
So the turn towards infrastructure, and the retreat from culture theory, for me are part of the same zeitgeist, the push-back against the preciousness of tarrying with semiotic systems in the era of what Anna Tsing has called “the damaged planet.” First we get the Marxian critiques – we get the Sid Mintzes and the Bill Roseberrys – scholars who argued, “Culture, sure. But what about power, what about domination?” That had its moment in the 60s and 70s, producing some amazing work, and then the vitality petered out, only returning after 2008. I think that this hiatus had something to do with real-world Marxism becoming an increasingly horrible, more obviously horrible, version of modernity than the liberal-capitalist modernity it was reacting against. But meanwhile, we’ve had first rumbles, and then storms gathering, marking the anti-anthropocentric turn, which brings us back from the semiological floating world, if I may be a little unkind, back to the concerns of the material, of the institutional, of the social, also in the posthuman sense of “social.” And so all the talk now about infrastructure seems to me like a kind of milestone on that path. As if to say: From here on out, at least for the foreseeable future, our theories of humanity and its presence and forms of life will have to take seriously not only the symbolic and the discursive, but also the material, and the technical, and the political. You can’t escape these questions any more, certainly not in the anthropocene.
Lately, anthropologists have been doing a lot of thinking about infrastructure. Although there have been anthropologists working on the large technical systems subtending modern sociality since at least the early 1970s, infrastructure today appears to be coming of age not only as a robust area of ethnographic engagement, but as a sturdy analytic in its own right, part of a widespread resurgence of materialist thought across the humanities. As Brian Larkin puts it in his recent piece for the Annual Review of Anthropology, contemporary work in the anthropology of infrastructure attempts to understand how underlying material structures function to “generate the ambient environment of everyday life.” In so doing, the conceptual ambit of the term has been expanded beyond sewers, roads, and telecommunication systems to include everything from modes of sociality to economic instruments.
Recently, I spoke at some length with Dominic Boyer about the emergence and expansion of anthropological interest in infrastructure. Dominic has devoted considerable organizational and intellectual attention to thinking through the human aspects of energy infrastructures, both in his role as the director of Rice University’s Center for Energy and Environmental Research in the Human Sciences and in his own fieldwork, with Cymene Howe, on energopower and the renewables transition in Mexico. The first half of this conversation appears below, with the second to follow later this week.
Ian: Your current work with Cymene Howe, on the development of wind energy in Oaxaca, focuses quite explicitly on infrastructure in the most literal sense. I’m curious, however, whether there were precursors of this focus in your earlier work?
Dominic: Our project does focus on infrastructure in the sense that, early on, we realized that the electric grid and the utility that manages it in Mexico were going to be central actors in telling the story of the politics of renewable energy transition. But infrastructure as analytic wasn’t really present to us as we were conceptualizing the research design. What’s interesting about the conversation around infrastructure to me is that it’s been a storm hovering on the horizon for a long while, and now the downpour has come and we’re all awash in infrastructure talk.
I remember vividly a few years ago, having come across some article on infrastructure, speaking with George Marcus, and I said “Infrastructure? Really?” And he said “Yeah! It’s going to be big.” Bellwether figures like George who are constantly circling the globe, ear to the ground, they hear the premonitions a little earlier. But in 2008-2009, when we were beginning our wind energy project, it wasn’t really there yet. We were certainly thinking about what I’ve articulated now as energopower, about fuel and electricity’s contribution to power, but the general rubric of infrastructure not so much.
Ian: If we’ve been studying traditional infrastructure like sewers under different terms for a while now, the analytic has certainly expanded greatly upon the common-sense definition. I’m curious, specifically at the level of epistemology, what you see the analytic of “infrastructure” as doing, productively or not, for us disciplinarily?
Dominic: My path to answering that question is a little idiosyncratic. I’m typically less interested in immersing myself in the latest analytic than in asking, “Isn’t it an interesting epistemic phenomenon that everyone is talking about infrastructure now? Why? Where is this coming from?” That is, to try to uncover something about the historicity of our analytic trends. So when I was asked, as a matter of not unpleasant surprise, but surprise nonetheless, to serve as a discussant on an infrastructure panel at last year’s AAAs, I had to pose exactly that question. Why is everyone going infrastructural? And I discovered that what these papers were doing in this panel was really interesting. If we tend to think of infrastructure as something that connotes a certain timelessness, then they showed us its temporality. If we imagine infrastructure as having material solidity, the papers showed us its flux and precarity.
The time I didn’t spend commenting on the papers, I spent ruminating on the question of “Why now?” And I had essentially three reflections. The first was that concern with what we are calling infrastructure, lower strata of experience, has a deeper [disciplinary] history. Not even to get into material culture, but just this idea of deeper strata that have consequences upon surface phenomena. Whether in a cognitive model, as for example with Lévi-Strauss and his structuralism, or in a more physico-material, Marxist investigation of how sub-structures condition the higher level structures of consciousness. I think that this sort of surfacing of hidden relations is a basic operation within anthropological knowledge; we have a sort of revelational impulse more generally.
That led me to the second reflection, which is to say that today’s focus on infrastructure often appears as commentary upon infrastructural collapse, infrastructural decline and decay, which seems to me impossible to sever from our position at the end of a 30-year period of more-or-less unquestioned neoliberal hegemony, which has never taken public infrastructure, public services, as being very high-priority goods. In some sense we’re living today in the ruins of Keynesian infrastructure, and that seems to me to be at least in the background here, too. In part, infrastructure signals our relationship to the failure of Keynesianism, and now 30-40 years later, to an incipient failure of neoliberalism to really deliver on its own promises.
But the third reflection, which connects more closely to the work I’m doing now, was really the question of where do we go conceptually and theoretically in the anthropocene. I think that if you were to look at the genealogy of critical analytical projects in the human sciences since the 1970s, the Marxian turn got shut down pretty quickly just as actually-existing Marxisms turned into rubble, and neoliberalism became ascendant. But second-wave feminism and ecofeminism proved a lot more durable, much more difficult to silence, and fed directly into science studies via Haraway and others. Challenging the human subject-centered world was central to that project. I think science and technology studies more generally has a kind of intimate relationship with Keynesian technocracy. It takes a critical position – rage is too strong a word – against the failure of the technocracy to continue to deliver on its promises of an ever-intensifying, ever-perfecting modernity. Instead, after the 1970s, as cracks in that apparatus multiply, analytic attention gets drawn to the constructedness of modern science and technology, and voilà, STS is born. And that decisive shift sets the stage in the 80s and 90s in turn for movements like posthumanism, and more recently, the New Materialism. They very much share in this anti-anthropocentric turn that begins, I think, first in the 1970s and only comes to fruition much later. And recognizing that theoretical trajectory means recognizing that we’re dealing with something very big here. This marks a real shift in the epistemic attentions of the human sciences. Certainly anthropology is part of it, but I really see this as a larger shift.
Ian: If the turn to infrastructure is part of the broader turn away from, or beyond, the human, how does the human figure in the anthropology of infrastructure specifically? Because some of the more radical instantiations of this trend seem to say “We’re done with it. We have all the resources to move beyond humanity as our principal object of inquiry.” Whereas we are disciplinarily and methodologically committed to taking humans as our objects, even if we have a theoretical commitment to moving beyond the human as the horizon of our analyses.
Dominic: Looking at what scholars are doing with infrastructure in anthropology, I think that the primary interest is in a certain class of systems. Grids, pipelines, roads, wiring; techno-politico-material assemblages that obviously involve humans at all different levels of planning, execution and use in fundamental ways, but which also allow scholars to comment on how distinctive forms of subjectivity and sociality emerge in combination with these assemblages. Now, I think you’re quite right that anthropology is loath to give up the human, and wisely so, but that doesn’t mean that we aren’t likely to see some very interesting experiments on the margins of the human. We already are. You look at the work in the burgeoning posthumanist anthropology of life, multispecies ethnography, and you can see how that sort of approach could easily be adapted to thinking about infrastructural questions. So I would expect that we’ll see everything from some pretty far-out, artful, posthuman experimental anthropologies of infrastructure – how do roads think? – all the way down to brutally empirical, realistic accounts of pipes, grids, etc.
Ian: It seems like in that bifurcation you also trace out two different types of skill necessary for people working on this. On the one hand, you need the sort of poetic knack necessary for capturing ephemeralities at the margins of the human, as you say. And then, on the other hand, the ability to rapidly develop familiarity with the technical system itself. I wonder whether you see a way to do both at once: to get the technical details right, but then to also have an evocative sense of what’s really going on out there beyond the clearing, in the Heideggerian sense.
Dominic: I think so. It seems to me that if we take the lessons from several decades’ engagement in science and technology studies seriously, from an anthropological point of view the projects that are among the most interesting are ones that have been able to operate on both those levels. Projects that have learned the science and technology to a degree that allows high-level translation from technical professions while at the same time retaining the appreciation for more humanistic forms of virtuosity. I’ll just come back to this revelational impulse. Maybe every field does this to some extent, but I feel that it’s very much something that anthropologists like to do — we love to present what you thought you knew about X, pull back a layer, and reveal some new twist on X. So there’s that kind of work we do constantly. Engaging intuitive frameworks of understanding, and jamming them to reveal deeper levels of complexity. With infrastructure it’s an interesting question because it’s already sort of de facto this consequential domain beyond visibility. That’s part of what makes it attractive and elusive, and so we’re burrowing beneath the surfaces of institutions, deeply into their walls and floors.
Earlier this month, I wrote about potential risks of sharing preliminary findings from the field, especially when they are related to a major social media company like Facebook. As anthropologist Daniel Miller discovered, doing anthropology in public carries the danger that journalists, bloggers, and others will pluck an ethnographic morsel from its context, and circulate it unmoored from those origins. Some news commentators, for example, reacted with panic to his contention that Facebook is “dead and buried” for some teen users in the UK. But if we don’t reach out to share our work, we equally risk provoking those who castigate academics for being too insular and our research too inaccessible. The debate about scholarly engagement in public resurfaced with renewed vigor last week (Just Publics @ 365 has a nice roundup) in response to New York Times columnist Nicholas Kristof’s piece “Professors, We Need You!” (Feb. 15, 2014).
I won’t spend much time rehashing the debate (as it were)—most of you are probably familiar by now with Kristof’s tired assertion that academics (“professors”) only write for specialized niche audiences and don’t pursue topics directly relevant to public life. But Kristof’s piece set off a wave of responses from academics who resolutely see themselves as engaged with non-academic audiences, and coalesced on Twitter around the hashtag #engagedacademics. In contrast to Kristof’s poorly informed views, a profusion of academic blogs, Twitter feeds, and news columns aim to relate scholarly work and insights to current topics of public interest—including this blog! So why didn’t Kristof realize this, and what should we do about it—that is, we as academics, but also we as the public (or publics)?
Kristof was primarily targeting the quantitative social sciences, especially political science—as Kerim at Savage Minds points out, anthropology rarely appears in these debates over the relevance of the social sciences:
The thing is, anthropology is full of public intellectuals. You see anthropologists across all different forms of media, from leading newspapers to blogs, to local talk radio. You see anthropologists working on behalf of communities all around the world as well as working as bridges between communities. And you see anthropologists working daily with the large portion of the public that is in school, training the next generation of public intellectuals.
So, if that’s true, why does the discipline always seem to be in a crisis about the state of our public intellectuals? Why do we feel so marginal to public discourse? Why do we barely even get mentioned in debates like the one that erupted in response to a certain op-ed columnist?
Kerim offers three excellent suggestions for why we often feel marginal to these discussions despite a lively, ongoing tradition of public engagement in anthropology. First, anthropological methods ground us in the specificities of what we research, so we are more likely to be engaged in conversations about our particular topics and avoid generalizing. Second, we are often oriented towards publics very different from those many in the news media imagine. Third, anthropological (and other critical) perspectives call into question the entire capitalist enterprise that shapes dominant public discourse, making it difficult even to get the conversation on the same page. So what do we as anthropologists and STS scholars think about how to write for diverse audiences and how to make our research findings accessible? For example, how could public writing better fit into academic career trajectories, such as for tenure files? And what would more partnerships look like between qualitative social scientists and journalistic or policy endeavors?
Finally, as a footnote to this discussion, the New York Times published an editorial a day after the Kristof piece on the harm caused by adjunctification in higher education. I find it ironic, though not shocking, that the Times presents these contradictory viewpoints without much self-awareness, taking professors to task for insufficient public engagement with one hand, while acknowledging changes in the university that are making academic life increasingly precarious for the majority of professors with the other. Clearly these tensions need to be resolved before we can have a meaningful conversation about how social scientists should be engaged in public discourse.
In the media these days, Artificial Intelligence (henceforth AI) is making a comeback. Kevin Drum wrote a long piece for Mother Jones about what the rising power of intelligent programs might mean for the political economy of the United States: for jobs, capital-labor relations, and the welfare state. He worries that as computer programs become more intelligent day by day, they will be put to more and more uses by capital, thereby displacing white-collar labor. And while this may benefit both capital and labor in the long run, the transition could be long and difficult, especially for labor, and exacerbate the already-increasing inequality in the United States. (For more in the same vein, here’s Paul Krugman, Moshe Vardi, Noah Smith, and a non-bylined Economist piece.)
The clearest academic statement of this genre of writing is Erik Brynjolfsson and Andrew McAfee’s slim book Race Against The Machine. Brynjolfsson and McAfee suggest that the decline in employment and productivity in the United States starting from the 1970s (and exacerbated by our Great Recession of 2008) is a result of too much technological innovation, not too little: specifically, the development of more powerful computers. As they say, “computers are now doing many things that used to be the domain of people only. The pace and scale of this encroachment into human skills is relatively recent and has profound economic implications.” To bolster their case, they introduce a variety of examples where programs are accomplishing intelligent tasks: machine translation, massive text discovery programs used by lawyers, IBM’s Watson and Deep Blue, the use of kiosks and vending machines and on and on. They end the book with a set of suggestions that would surprise no one who reads economists: more redistribution by the state, an education system that increases human capital by making humans partners with machines rather than their competitors, adopting a wait-and-see approach to the many new forms of collaboration that we see in this digital age (crowd-sourcing, the “sharing economy” etc.) and a few others.
Curiously though, Race Against the Machine has very little to say about the innards of this technology whose rise concerns the writers so much. How or why these programs work is simply not relevant to Brynjolfsson and McAfee’s argument. Usually, their go-to explanation is Moore’s Law. (Perhaps this is another way in which Moore’s Law has become a socio-technical entity.) To answer the question “why now,” they use a little fable (which they credit to Ray Kurzweil). If you take a chessboard and put one coin on the first square, two on the second, four on the third, and so on, doubling each time, the numbers start to get staggeringly large pretty soon. They suggest that this is what happened with AI. Earlier advances in computing power were too small to be noticed – but now we’ve passed a critical stage. Machines are increasingly going to do tasks that only humans have been able to do in the past.
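The arithmetic of the fable is easy to verify. Here is a quick sketch in Python (purely illustrative, not anything from the book) of the doubling at work:

```python
# The chessboard fable: one coin on the first square, then doubling
# on each of the 64 squares thereafter. Purely illustrative.
def coins_on_square(n):
    """Coins on square n (1-indexed): doubles with every square."""
    return 2 ** (n - 1)

total = sum(coins_on_square(n) for n in range(1, 65))
# coins_on_square(10) is a modest 512; coins_on_square(64) is roughly
# 9.2 quintillion, and the running total is 2**64 - 1.
```

The last few squares dwarf everything that came before, which is the intuition Brynjolfsson and McAfee are borrowing.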
If, as an STS scholar, you find this explanation a bit hand-wavy, I completely agree. How might one think about this new resurgence of AI if one thinks about it as a historian of technology or a technology studies scholar rather than as a neoliberal economist? In the rest of this post, I want to offer a reading of the history of Artificial Intelligence that is slightly more complicated than “machines are growing more intelligent” or “computing power has increased exponentially leading to more intelligent programs.” I want to suggest that opening up the black-box of AI can be one way to think about what it means to be human in a world of software. And by “opening up the black box of AI,” I don’t mean some sort of straightforward internalist reverse-engineering but rather, trying to understand the systems of production that these programs are a part of and why they came to be so. (In this post, I won’t be going into the political economic anxieties that animated many of the articles I linked to above, although I think these are real and important – even if sometimes articulated in ways that we may not agree with.)
With that throat-clearing out of the way, let me get into the meat of my argument. Today, even computer scientists laugh about the hubris of early AI. Here is a document from 1966 in which Seymour Papert describes what he calls his “summer vision project”:
The summer vision project is an attempt to use our summer workers effectively in the construction of a significant part of a visual system. The particular task was chosen partly because it can be segmented into sub-problems which allow individuals to work independently and yet participate in the construction of a system complex enough to be a real landmark in the development of “pattern recognition”.
One summer to create a system “complex enough to be a real landmark” in pattern recognition! Good luck with that!
Even back then, AI was not a static entity. Papert’s approach – which was the dominant one in the middle of the 1960s – was what Mikel Olazaran calls “symbolic AI.” Essentially, in this approach, intelligence was seen to reside in the combination of algorithms, knowledge representation and the rules of symbolic logic. Expert systems that were built in the 1980s (and about whose “brittleness” Diana Forsythe has written memorably) were the culmination of symbolic AI. These were built by AI experts going out and “eliciting knowledge” from “domain experts” through interviews (although as Forsythe points out, they would usually only interview one expert), and then crafting a series of intricate rules about what they thought these experts did.
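The symbolic style can be caricatured in a few lines of code. The example below is a made-up miniature, not any historical system: the “intelligence” lives entirely in if-then rules a human wrote down after interviewing an expert, which is precisely why such systems were as brittle as the interviews behind them.

```python
# A toy sketch of the expert-systems style: hand-written if-then rules,
# as if elicited from a domain expert. Entirely invented for illustration.
RULES = [
    (lambda facts: facts["fever"] and facts["cough"], "suspect flu"),
    (lambda facts: facts["fever"] and not facts["cough"], "suspect other infection"),
    (lambda facts: not facts["fever"], "no diagnosis"),
]

def diagnose(facts):
    """Fire the first rule whose condition matches the given facts."""
    for condition, conclusion in RULES:
        if condition(facts):
            return conclusion
```

Any situation the expert never mentioned, and the rule-writer never anticipated, simply falls through the cracks.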
The other approach was connectionism. Rather than linguistically-rendered rules constituting thought, connectionists suggested that thought came out of something that looked like a bunch of inter-connected neurons. This research drew extensively on the idea of the “perceptron” that Frank Rosenblatt conceived in the 1950s, and on the ideas of feedback in cybernetics. Perceptrons were discredited when Papert and Minsky, the leaders of the symbolic AI faction, showed that perceptrons could not model more than simple functions, and then revived in the 1980s with the invention of the back-propagation algorithm by Rumelhart and McClelland. As Olazaran shows, the perceptron controversy (i.e., the way perceptrons were discredited and then revived) is an excellent test-case for showing that science and technology in the making allow for a great deal of interpretive flexibility.
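For readers who have never seen one, here is a minimal sketch of a Rosenblatt-style perceptron in plain Python (the toy data and parameters are my own, not drawn from any of the work discussed above). The point is that the learned “rule” is nothing but a weight vector nudged on errors – and that, as Minsky and Papert observed, a single layer cannot capture even a function as simple as XOR.

```python
# A minimal Rosenblatt-style perceptron, in plain Python.
# The "rule" it ends up with is just a weight vector and a bias,
# not anything linguistically legible -- the contrast with symbolic AI.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of ((x1, x2), label) pairs, label in {0, 1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred          # error-driven weight update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# AND is linearly separable, so the perceptron learns it:
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([predict(w, b, x) for x, _ in AND])  # [0, 0, 0, 1]

# XOR is not linearly separable: however long it trains, a
# single-layer perceptron's predictions can never be [0, 1, 1, 0].
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w2, b2 = train_perceptron(XOR)
print([predict(w2, b2, x) for x, _ in XOR])
```

Back-propagation, roughly speaking, is what lifted this limitation: with hidden layers between input and output, the same error-driven updating can fit non-linearly-separable functions like XOR.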
The revival of connectionism led to more and more interest in using statistical methods to do intelligent tasks. Today the name that computer scientists use for these methods is “machine learning” (or sometimes “data mining”). Indeed, it is machine learning that is at the heart of many of the programs discussed by Brynjolfsson and McAfee (and interestingly enough, the phrase does not appear at all in the book). The question of how this shift of interest to machine learning happened is open. Perhaps it was because funds for symbolic AI dried up; also, corporations had, by this time, started to increasingly use computers and had managed to collect a significant amount of data which they were eager to use in new ways. I, for one, am looking forward to Matthew Jones’ in-progress history of data mining.
The difference between machine learning/connectionist approaches and symbolic AI is that in the former, the “rules” for doing the intelligent task are statistical – expressed as a rather complicated mathematical function – rather than linguistic, and generated by the machine itself, as it is trained by humans using what is called “training data.” This is not to say that humans have no say about these rules. Indeed, many of the decisions about the kinds of statistical methods to use (SVMs vs. decision trees, etc.), the number of parameters in the classifying function, and of course, the “inputs” of the algorithm, are all decided by humans using contingent, yet situationally specific criteria. But while humans can “tune” the system, the bulk of the “learning” is accomplished by the statistical algorithm that “fits” a particular function to the training data.
|Expert systems as the culmination of symbolic AI. The “intelligence” of the system consists of rules, constituting an “inference engine,” that embody the way the expert works.|
|In machine learning and connectionist approaches, the “intelligence” resides in a mathematical function that is learnt after feeding the algorithm “training” data.|
I want to suggest that the rise of machine learning was accompanied by three institutional changes in the way AI research was done. First, researchers stopped worrying about the generalizability of their algorithms and concentrated on specific tasks. Second, they concentrated their efforts on building infrastructures through which machine learning algorithms could be scaled and used as off-the-shelf components. And finally, they turned to networked computing itself to combine these machine learning algorithms with human actors. (The culmination of these is crowd-sourcing, which is both a source of data for machine learning algorithms and part of a scaled machine learning architecture that accomplishes tasks using both humans and machines.)
AI practitioners today – in particular, machine learning researchers – are perfectly willing to be practical, to concentrate on particular domains rather than look for general purpose laws. For instance, consider this definition of machine learning given by Tom Mitchell (the author of a canonical textbook on the topic) in a piece written in 2006 – one of those pieces where the giants of the field try to set out the outstanding research problems for young researchers to solve.
To be more precise, we say that a machine learns with respect to a particular task T, performance metric P, and type of experience E, if the system reliably improves its performance P at task T, following experience E. Depending on how we specify T, P, and E, the learning task might also be called by names such as data mining, autonomous discovery, database updating, programming by example, etc.
Researchers concentrate on coming up with practical definitions of T, P and E that they can then proceed to solve. Back when I used to be a computer scientist designing algorithms to detect “concepts” in video, my adviser used to complain when IBM researchers designed classifiers to detect concepts like “Bill Clinton” or “Hillary Clinton.” His point was two-fold: first, designing a classifier to detect “Bill Clinton” in a video was not good science because it in no way advanced the research agenda of detecting concepts in video; second, IBM would do it anyway because it could (it had the resources) and because it would be useful.
Recently, Noam Chomsky criticized this tendency of AI researchers to use probabilistic models in computational linguistics. Probabilistic models were fine, Chomsky suggested, if all you wanted to do was make Google Translate work, but they were not real science because they were not causal. In response, Peter Norvig (co-author of the famous textbook on Artificial Intelligence and the director of research at Google) replied that probabilistic modeling works – and in a way that those elegant theories of the General Problem Solver never did. Norvig goes even further, advocating that computer scientists not worry too much about generalized theories because nature, he suggests, is far too complicated for that. (I really recommend reading Norvig’s response to Chomsky, since it’s the kind of debate that computer scientists rarely have.)
The second factor is scale. Computer scientists have devoted a great deal of effort in recent years to building infrastructures that make machine learning easier. This includes black-boxed, off-the-shelf versions of these algorithms that can simply be trained and used. It also includes massive frameworks in which these algorithms can be coupled to each other and multiplied to build more and more complicated systems. So, for instance, you connect a thousand classifiers together (one for Bill Clinton, another for cars, etc.), and the result is a powerful system that can be used for “intelligent” tasks in video concept detection (e.g., “find all clips in videos where Bill Clinton is driven around”). IBM’s Watson is exactly an example of such a massively scaled system – scaled to almost unimaginable levels solely so that it could match the human challengers in a particular task, i.e., Jeopardy!. In Watson, there are classifiers whose output trains other classifiers, and on and on.
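As a rough illustration of the combinatorial logic (the names, scores, and thresholds here are entirely hypothetical, not IBM’s or anyone else’s actual architecture), each trained classifier can be treated as a black box that scores one concept, with compound “intelligent” queries assembled from their outputs:

```python
# Hypothetical sketch: each "classifier" is a black box that fires on
# one concept per video frame; a compound query is just a conjunction
# of classifier outputs. Real systems would wrap trained models here.

def clinton_classifier(frame):
    return frame.get("clinton_score", 0.0) > 0.5

def car_classifier(frame):
    return frame.get("car_score", 0.0) > 0.5

def query(frames, *classifiers):
    """Return indices of frames on which every classifier fires."""
    return [i for i, f in enumerate(frames)
            if all(c(f) for c in classifiers)]

frames = [
    {"clinton_score": 0.9, "car_score": 0.8},  # Clinton in a car
    {"clinton_score": 0.9, "car_score": 0.1},  # Clinton, no car
    {"clinton_score": 0.2, "car_score": 0.9},  # car, no Clinton
]
# "Find all clips where Bill Clinton is driven around":
print(query(frames, clinton_classifier, car_classifier))  # [0]
```

None of the individual classifiers is “intelligent” in any general sense; the apparent intelligence of the query comes from composing many narrow, separately trained components at scale.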
The third factor is, I think, the most interesting. Machine learning researchers do not really expect their algorithms and programs to do all the work. Rather, they see these algorithms as working with humans, who might supervise them, tune them, provide them with data and interpret their results and so on. They seem to have taken to heart Terry Winograd’s final recommendation at the end of Understanding Computers and Cognition, in which he actively disavowed the symbolic AI that he had worked so hard to create, and instead suggested that computer scientists concentrate on designing useful assemblages of humans and computers rather than on creating intelligent programs. This is brought out beautifully in Alexis Madrigal’s recent story about Netflix’s recommendation algorithm. Briefly, Madrigal set out to find out how Netflix’s quirky genre system (Scary Movies Based on Books, Comedies Featuring a Strong Female Lead, etc., 76,897 micro-genres in all) is generated. The answer: through a combination of humans and machines (again, nothing new here for STS scholars). First, the Netflix engineers spent a few years creating recommendation engines using predictive algorithms that trained on the “ratings” that users provided before deciding that viewer ratings alone were not enough. They then started to employ people who “code” a movie (in terms of genre, etc.) based on a rigorous how-to manual that Netflix has created, especially for them. This movie-categorization was then combined with the ratings of Netflix viewers. By actively clustering both of these in some mathematical space, and then interpreting the clusters that result, they produced the quirky genres that Netflix has since become famous for. This is not a bad example of how AI is put to work today in systems of production. 
(It’s a pity that Madrigal focuses his attention so relentlessly on the specific recipe used to create the Netflix genres because his account offers fascinating glimpses of a whole host of behind-the-scenes work going on at Netflix).
These three factors – creating probabilistic machine learning classifiers for concepts without worrying about their generality, creating and using massive infrastructures to combine and mix-n-match machine learning algorithms, and combining the outputs of humans and machines in carefully supervised ways – explain the revival of AI, it seems to me, better than the standard accounts, as well as the reason why many people fear its political-economic implications. I will admit upfront that this is a mostly internalist story of what is happening, one that boils down to “programs are being trained to do very particular tasks using data (perhaps specially created!) and then combined in ad-hoc ways.” But I hope that it is a little more complicated than “computer power has increased by leaps and bounds” or “machines are getting more and more intelligent” (although neither of these is necessarily inaccurate). What I really hope is that it will help us think through the political economic implications of the new AI. And that, perhaps, is a subject best left for another day (given the length of this blog post!).
At the 112th annual meeting of the American Anthropological Association last November, I was pleased to take the reins as co-chair of CASTAC alongside returning co-chair Jennifer Cool. I’d like to take this opportunity to thank my predecessor Rachel Prentice for all of her hard work in building our organization up to its current strength and numbers. In what follows, I’ll introduce myself and share some thoughts about CASTAC and its future.
I come to CASTAC and, more broadly, to science and technology studies via the study of sustainable development in non-urban spaces. My current project explores the intersection between renewable energy projects and ordinary life in a northern German village on the path to zero-sum living. Germany’s current “energy turn,” its transition from nuclear power to alternative energy sources, is transforming rural communities into sites of lucrative speculation, where capital investment and environmental politics take form around the technoscientific promise of renewables. In the two decades since the transition was coded into federal law, the village where I work has been terraformed by the installation of wind turbines, solar arrays and now biofuel processing technology. Practices that were already commonplace in the village (such as the harnessing of wind for land reclamation, the use of sun for heat or the use of biomass for fertilization) have been mutated and scaled up into engines of ecocapital (as wind turbines, solar panels, and biogas processing plants) at the same time that villagers have been recast as energy citizens who take part in the transition by recycling, installing solar panels or investing in wind parks or biofuel ventures.
In the midst of these events, my present work takes up one aspect of this process, namely the fluid technologies that congeal around technoscientific materials in ordinary life, and the repercussions these have for the public participation and social mobility amongst villagers. I theorize how energy comes to be known and re-membered in this space, both as a commodity and as a sense of animacy or vitality in everyday life, with an eye to the ways in which these understandings map onto class formations. As an ethnographer, I am particularly committed to articulating mundane sensory experience of renewable energy technologies, the feelingful registers through which energy takes form in everyday collisions of matter, and the ways in which energies teem and surge, becoming unruly in some contexts and “ruly” in others. This work continually challenges me to consider the implications of sensory ethnography and public culture studies for STS, and vice versa.
As I pursue my own research, I am reinvigorated by CASTAC members’ shared commitment to the ethnography of technoscientific worlds across human and nonhuman things. CASTAC was part of anthropology’s early forays into cultures of computing and digital technology. Today, we are experiencing a resurgence as emerging technologies bring new and unexpected ethnographic objects under the purview of STS. The CASTAC Blog is one valuable resource that has spun out of that resurgence, allowing anthropologists of STS to build bridges with diverse publics, including with STS scholars in other fields. CASTAC opens a space for dialogue with non-anthropologists as well as with anthropologists whose work speaks to our own, whether in political anthropology, environmental anthropology, feminist anthropology, queer anthropology, or other subfields. These connections are invaluable at a time when pressing social issues from income inequality to global warming compel novel approaches to technopolitical concerns. Additionally, university systems are in a state of transformation, where previously commonsense boundaries between academic and popular culture seem to be in flux, alternately rigid and shifting, porous and impermeable. Situations such as these call for experimental approaches to making and sharing knowledge in a variety of settings, whether via methodological innovation, new ethnographic writing, and/or dialogue through multiple media. As the posts on this blog demonstrate, CASTAC members are committed to discussing their research in new and diverse venues, in hopes of collaboratively mapping new paths for action in an unsettled world. As a co-chair of CASTAC, I am pleased to help facilitate this important work, and welcome any questions or comments you may have regarding CASTAC and its projects in the days ahead.
This year the General Anthropology Division (GAD) welcomed Bruno Latour as its Distinguished Lecturer at the 112th Annual Meeting of the AAA. Latour’s talk, “What Is the Recommended Dose of Ontological Pluralism for a Safe Anthropological Diplomacy?” was recorded on video, presented here with my opening remarks.
Latour has been at Sciences Po Paris since 2007, first serving five years as Vice President of Research before returning to the faculty as Professor.
Latour’s work is as expansive as it is influential, crossing disciplinary boundaries from science and technology studies, to anthropology and archaeology, religion, architecture, and environmental studies as readily as the humans and objects Latour connects into large agential networks in his actor-network theory, or ANT. Professor Latour’s research began with his doctoral work on Biblical exegesis. He then moved to studies of science that brought ethnography into a scientific laboratory leading to his books Laboratory Life (1979), co-authored with Steve Woolgar, The Pasteurization of France (1988), and the widely influential Science In Action (1987).
Not satisfied with actor-network theory as it stood, or with the “social turn” to which ANT was a significant contributor in science and technology studies, Latour remade ANT in Reassembling the Social (2005). Arguing that “the social” and “society” represented the same kind of black boxes in the social sciences that the social turn in STS had attacked in the realm of technoscience, Latour proposed that “society” entails a concrete set of relationships that are reassembled in particular ways that vary across contexts like law, religion, and science.
What links Professor Latour’s studies of religion and science is his ongoing interest in regimes of truth. That interest led to his reworking of ANT in Reassembling the Social; to his studies of modernity in We Have Never Been Modern, which challenges the reality of “modern” dichotomies separating nature from culture and humans from things; and to his most recent volume, An Inquiry Into Modes of Existence (2013).
Special thanks to Jennifer Cool for transcoding and uploading this video.
I am proud to say that The CASTAC Blog has become a truly impressive archive of scholarly and practical information for research, applied practice, and teaching. Last year the Blog saw a rich set of posts on research, pedagogy, and practice that may yield inspiration for student papers, future trends in scholarly articles, and cross-pollination of ideas for new research projects. Indeed, I encourage my anthropology of technology students to peruse the site for inspiration about current topics of interest in the STS community.
Of course, it is impossible to cover the contents of an entire year of material in a single report, but I would like to continue the yearly tradition of calling out a few themes that emerged across several posts. These themes include: nuanced ideas about performance; debates about intensive engagement with personal analytics; discussions about taken-for-granted, everyday infrastructures; and re-imaginings of the future of past waste. Interestingly, these themes are not isolated but have their own intersecting echoes and intellectual provocations.
Multiple forms of performance
Blogs provide a space for people to discuss and revisit different dimensions of influential theoretical constructs. Many scholars have ideas about what constitutes “performance,” and how that term applies in their own work. During the past year, it was interesting to see how that widely-applied word in anthropology, sociology, and STS connoted dramatically or subtly different meanings across various STS projects.
Last year, The CASTAC Blog saw several provocations and explorations of different dimensions of performance. In my own blog post on Performances of Technical Affiliation, I discussed how the performative is crucial for many technologists to function, in part by transmitting (intentionally and unintentionally) clues about their technical affiliations and beliefs. In some cases, however, I argue that such performances get in the way of knowledge exploration. I discussed how knowledge production becomes imbricated into performance. The idea was to open a space to expose those aspects of technical identity performance that challenge free and more democratic exploration of technologies.
In her popular post about American Fans of Japanese Popular Culture, Rebecca Carlson examined performances of “Japaneseness” in online spaces. She studied fans of Japanese popular media who learn to speak Japanese, study Japan’s history and culture, adopt the speech of anime characters, and even live and work in Japan. She explored the performative dimensions of Japaneseness online and concluded that online, “the dividing line between what is Japanese and what isn’t can become muddied.” Her post reiterates how transnational performativity challenges our understanding of the basic concept of ethnicity.
Tackling the issue of performance from a very different perspective, Chris Furlow’s post on Moving Beyond Doping Scandals moved beyond the specifics of newsy scandals and explored “the structures that both support and condemn a culture of doping.” He opened by pointing out certain scientific benchmarks for performance, framed in terms of the power the body can produce. But his main focus was to explore the metaphor of performance, and how the meaning of “enhanced performance,” which carries positive connotations in many fields, is viewed quite negatively in sports such as cycling. He looks forward to exploring the metaphor of performance, and how the concept “is being contested and is useful for related debates about the nature/culture and human/machine boundaries.”
In her post On Being a “Natural” Human, Jamie Sherman offers a different take on performance in her discussion of “all natural” versus “drug” bodybuilders. Sherman’s work focuses on how men and women compete for the “ideal” body in seemingly similar yet different ways, targeting more muscle, less fat, and symmetrical proportions. In these milieus, personal analytics become crucial, as one must know one’s own body. Performance means exceeding one’s own limits and refusing artificial “glass ceilings” or other limiting boundaries. These explorations inevitably lead to questions about what constitutes a “human” and whether interventions constitute fakery or an extension of human capacity.
Better living through analytics?
Accumulating forms of personal “knowledge” discussed in several posts on performance was not limited to elite athletes and body builders. This past year saw much activity in uses of formal analytics in everyday life. The jury appears to be out on whether such intensive monitoring of daily activity will ultimately be helpful for everyone, but certainly several contributors to The CASTAC Blog discussed its benefits. Expect to see much more discussion about using analytical approaches both in research and methodology in the coming year.
In terms of methods, analytics have been proposed as useful for understanding the human condition. In his post on Ethnographic Analytics, Ken Riopelle states that using an emotional indicator that he calls the Positivity Index can reveal insights that may overcome the limitations of an ethnographer’s five senses when collecting data. His group’s NSF-funded, mixed-method study used software tools such as linguistic word counts to investigate how or whether engineering teams flourish when engaging in innovative projects. The analytics helped the research team not only glean material for future interviews and observations, but also yielded “a deeper understanding of the context surrounding” their interviewees’ work. More information is available in a recently released book: Advancing Ethnography in Corporate Environments: Challenges and Emerging Opportunities. Interested parties may also try out the method on the Linguistic Inquiry and Word Count website.
Personal analytics was also discussed in Dawn Nafus’s interesting post, The Quantified Self (QS) Movement is Not a Kleenex. QS participants numerically track their bodies using metrics such as heart rates that index qualitative aspects of activities such as sleep and exercise. Contrary to the popular assumption that QS participants merely fetishize or proselytize technologies, Nafus found that their practices are resisting “big data and big science dictums that make claims about the healthy body from on high.” Instead of passively going along with health guidelines in a one-size-fits-all paradigm, QS participants strive to deduce what practices are most beneficial to them as individuals. Even though pundits brand the QS movement as a generic “Kleenex” kind of brand, ethnographic investigation reveals that people embracing this paradigm are attempting to re-assert control over their own bodies in a form of resistance to Foucauldian, big-data-driven forms of biopolitical control.
In a delightful and richly informative post on getting Fit for Halloween, Elizabeth Churchill field tested the best apps for running away from Zombies! I confess I had an outdated view of zombies as slow and stumbling brain eaters, but she quickly explained that today’s zombies are often rage zombies who are fast and far more aggressive than their shuffling predecessors. What one learns from ethnography!
Churchill analyzed new devices and apps that take advantage of mobile lifestyles and ever-shrinking sensors to help people track their efforts to exercise, eat better, and get fit. Some of these devices even include post-apocalyptic story lines in which the runner must save citizens from aggressive zombie attacks. The ultimate goal is to use personal analytics in conjunction with punishment and reward behavior modifications to maintain more healthy lifestyle choices.
Practices such as “gamification” and “persuasive technology,” however, are not necessarily effective. Churchill argues that such approaches “have a reductive engineering bias that translates potentially complex behaviors into turn-key models that are easily implementable,” but are not necessarily sustainable in the long term. What are really needed are frameworks that offer a sense of fun that lead to more long-term motivational solutions. Sometimes that means running away from zombies!
Next year, look for indications that individuals are pushing back against top-down entities and questioning their role in larger infrastructures that shape everyday activities and needs. Several posts last year explored how individuals are questioning the design, implementation, and connotations of infrastructures that are often taken for granted in daily life.
In his thought-provoking post on What’s Up in the Cloud(s), Casey O’Donnell explores how underlying practices of what is now being called cloud computing are being enforced in ways that are “shifting our relationship with computers and software companies.” He explains that cloud “infrastructures” such as Amazon Web Services are composed of connected computers and networks. In contrast, cloud-based services, such as Creative Cloud or Gmail, are delivered via this infrastructure. Knowing that one is trading access to personal data for free services does not mean that one has any practical alternatives these days. Despite the propaganda that such services address users’ needs, O’Donnell reiterates the fact that such structures are designed to meet corporate needs in ways that encourage handing over one’s data, often for unknown and possibly personally deleterious uses.
Concerns over infrastructural parameters also emerged in a post I wrote called Killing Comments. Popular Science announced that it was removing comments from its website, citing potentially negative influences of comments on the reception of scientific findings. They suggest that people take their commentary to social media. Of course, most social media sites are not exactly havens for protecting participatory rights. Google also announced new comment procedures on YouTube that are meant to identify and highlight “relevant” comments, where “relevant” is defined by Weberian bureaucratic entities that implement such curation policies for an entity’s financial gain. These so-called participatory infrastructures are leading to a Brave New World of Web 1.0 (if you’ll excuse that term), and this is a trend to watch closely and actively push back against in the coming year.
In his absolutely spell-binding post about being Off the Grid in the Modern World, Phillip Vannini explores what it really means to reject a deeply embedded infrastructure that provides power and energy resources. Vannini quickly distinguishes casual claims of going off the grid, like turning off one’s cell phone for the weekend, from deeper commitments in which a household or entire community chooses to exist in a “state of disconnection from the electricity and natural gas infrastructure servicing a region.”
Many “off-gridders” seek independence and achieve a sense of self-reliance and self-sufficiency that literally provides a sense of individual power in their own homes. Living off grid is hard work, and demands many skills and responsibilities to make the challenges seem bearable. Off-gridders do not just protest top-down infrastructures; they enact lifestyle philosophies that show that infrastructural acceptance is not always the only option. Linking his comments to O’Donnell’s, one wonders: how might it be possible to go off-grid in other ways, such as coming down from the cloud(s)?
In the coming year, I suspect we will be exposed to a lot of garbage—and rightly so. Generation of waste seems out of control, and for too long the infrastructure of waste disposal has remained hidden in the scholarly shadows. Material culture researchers might argue that fantasies of zero-emissions are not often plausible, and societies need to deal collectively with waste that is being produced at exponential rates.
Robin Nagle’s post on The Many Mysteries of MSW shows how the mundane task of garbage disposal is absolutely vital to the functioning of a city. Garbage disposal is fundamental not just to capitalism, but to basic public health. Yet it remains stigmatized and underappreciated in classist ways. Despite the common assumption that garbage disposal is a safer occupation than police work or fire fighting, Nagle points out that according to the Bureau of Labor Statistics, “A sanitation worker is many times more likely than a police officer or a firefighter to be injured or killed on the job.” Loss of limb or life can come from operating machinery, passing traffic, or projectiles of dangerous objects shooting from the hopper blade in the back of a garbage truck. Nagle’s fascinating research shows that a mundane activity often stigmatized by outsiders is not so regarded by sanitation workers themselves, and, increasingly, is becoming appreciated as an important area of technological focus by anthropologists who understand the subject as a “scholarly treasure.”
Treasure in the form of junk also finds new life among steampunk artisans and inventors. According to Matt Hale’s post, Steampunk: Reimagining Trash and Technology, steampunks scour junk stores and yard sales for gadgetry that finds new life in the form of costumes, jewelry, accessories, and clever new gadgets. Inspired by the steam-powered mechanical science fiction of Jules Verne and the contemporary do-it-yourself (DIY) ethos, steampunks argue that humans no longer have a meaningful relationship with technology. Devices are cheap, sterile, and disposable, thus promoting transient and superficial relationships between humans and technological devices.
Humans hardly know how things work. In response, steampunks engage in processes of “upcycling,” which refers to “converting discarded material resources into new forms with the intention that the consequent products be of a higher and more sustainable quality than they had been in their previous iteration(s).” Steampunks transform junk into useful art that turns waste into a means of production. In reading Hale’s post, it would seem that the steampunk movement is another way of reacting to technological infrastructures that promote a one-size-fits-all mode, one that addresses the needs of the entity of production rather than the individual user. In this sense, steampunks are also attempting to personalize mass-produced instantiations of technical ideologies.
Looking forward to 2014
These topics are but a small fraction of the content that has graced the pages of The CASTAC Blog last year. They hint at many interesting trends to follow and we look forward to hearing about the issues and topics that interest you in the coming year.
As many of you know, The CASTAC Blog is expanding. Due to the overwhelming support and intellectual interest in the blog, we were delighted to announce a new team that includes several new Associate Editors and a new Associate Web Producer who will be bringing new and expanded content to the blog this year. For more details, please check out last week’s post welcoming our new team. Feel free to contact the appropriate Associate Editor if there is content you would like to see in their area of expertise, or if you would like to contribute a post about your own work.
We look forward to hearing what you have to say in the coming months!
Over the last two years I have been conducting research into amateur biology in and around Silicon Valley. During that time, I have worked as a volunteer in a DIYBio lab and on a pair of laboratory projects, one an unlikely precursor to the Glowing Plant project and another that fell into the dustbin of scientific history. Which is to say, for every project that captures media attention and attracts funding like Glowing Plant, there is an equally interesting project struggling to generate interest and find collaborators. With that in mind, I want to discuss some of the tensions within DIYBio laid bare by the success of the Glowing Plant Kickstarter campaign.
Glowing Plant needs little introduction, as it has been the subject of sustained media attention since the Kickstarter campaign it launched early this summer. To give a brief technical overview, Glowing Plant intends to take a luciferin system from the marine bacterium Vibrio fischeri, which is found in squid, and place it into an Arabidopsis plant, thus causing the plant to bioluminesce at night. As outlined on their Kickstarter page, to effect this transformation they plan to use the software program Gene Compiler to design DNA sequences, then have these sequences laser printed by Cambrian Genomics. Once printed, the DNA will be sent to the Glowing Plant laboratory via FedEx (the standard way to move DNA around the world) and then transformed into the Arabidopsis plant via Agrobacterium. This portion of the project is regulated by material transfer agreements governing the movement of recombinant DNA between laboratories and, due to the use of Agrobacterium, no plants transformed in this manner can leave the Glowing Plant laboratory. However, once the plants transformed with Agrobacterium are deemed to glow enough, a final transformation will be done with a gene gun. It is the use of the gene gun that allows Glowing Plant to ship a genetically modified organism (GMO) to consumers without regulatory oversight. Here Glowing Plant follows a strategy pioneered by Monsanto to market GM bluegrass.
To date, most commentary around the Glowing Plant project has concerned the ethics of biotechnology in general and the specific ethical implications of using biotechnology to make a consumer product in particular. But as the Monsanto case illustrates, this is not a strategy unique to Glowing Plant, though Glowing Plant is an easier legal target, as it lacks Monsanto’s war chest. I would like to emphasize a different aspect of the project: the role of hype in the crowdfunding campaign and the claims made by the organizers for the ease and effectiveness of this technology. Keep in mind that while much of the critical commentary, both pro and con, has assumed this plant can be created (it is a greater technical challenge than making bluegrass), the plant is still hypothetical and its creation is not a certainty. Apart from the still hypothetical existence of the plant, there is speculation, due to the energetic requirements of light production, over whether the transformation can produce a light visible to the naked eye.
As I hinted at above, despite starting out in a DIYBio lab, the Glowing Plant Kickstarter campaign was sponsored and supported by a number of startup companies associated with Singularity University, and it maintains close ties to that startup ecosystem. The core Glowing Plant team consists of one Stanford-trained PhD conducting laboratory work, one Stanford post-doc, and a former Bain & Company consultant. Further, prior to receiving funding, Glowing Plant hired a digital marketing firm to manage its Kickstarter and PR campaign. Though the campaign is often portrayed as a product of the wisdom of the crowd, the pump was primed well before it was underway. In this sense, Glowing Plant follows the classic Silicon Valley pattern of the well-supported disruptive startup pioneering a new market, in this case consumer synthetic biology, by forcing a product through a regulatory grey area.
Unsurprisingly, Glowing Plant has been the subject of much angst within DIYBio, both over the manner of the project’s exit from the DIYBio laboratory where the idea was hatched and the preliminary lab work conducted, and over the project’s aggressive approach to regulation. Most DIYBio laboratories ask for no monetary commitment beyond individual membership dues or class fees and struggle financially as membership often ebbs and flows. In this environment, the departure of a popular and compelling project from a lab creates a difficult gap to fill. Glowing Plant’s approach to regulation and blanket claims of environmental safety also threaten to bring regulatory scrutiny to DIYBio as a whole, which to date has been a modest and self-regulating field. Already, Kickstarter has issued a category ban on GMOs, and Glowing Plant has moved from working at a public DIYBio laboratory into a private laboratory at an undisclosed location.
All of which raises the question posed by the title: What is the relation between Glowing Plant’s ambitions and DIYBio’s oft-stated goals and code of ethics? The rhetorical appeal made in Glowing Plant’s Kickstarter video is not to the ideals or ethics of DIYBio, but rather to the frontier, which indexes an entirely different and distinctly American set of ideals. As co-founder Anthony Evans says in the Glowing Plant Kickstarter video, “our generation’s frontier is synthetic biology, our guide nature itself.” Where the DIYBio code of ethics draws on commonalities across distant polities of amateur biologists to urge self-regulation, moderation, and action in the common interest of amateur inquiry, Evans’ appeal to the frontier connotes an unregulated individualism coupled to the exploitation of a peculiarly American approach to regulation, in which an action not explicitly illegal is implicitly allowed.