In the media these days, Artificial Intelligence (henceforth AI) is making a comeback. Kevin Drum wrote a long piece for Mother Jones about what the rising power of intelligent programs might mean for the political economy of the United States: for jobs, capital-labor relations, and the welfare state. He worries that as computer programs become more intelligent day by day, they will be put to more and more uses by capital, thereby displacing white-collar labor. And while this may benefit both capital and labor in the long run, the transition could be long and difficult, especially for labor, and exacerbate the already-increasing inequality in the United States. (For more in the same vein, here’s Paul Krugman, Moshe Vardi, Noah Smith, and a non-bylined Economist piece.)
The clearest academic statement of this genre of writing is Erik Brynjolfsson and Andrew McAfee’s slim book Race Against The Machine. Brynjolfsson and McAfee suggest that the decline in employment and productivity in the United States starting from the 1970s (and exacerbated by our Great Recession of 2008) is a result of too much technological innovation, not too little: specifically, the development of more powerful computers. As they say, “computers are now doing many things that used to be the domain of people only. The pace and scale of this encroachment into human skills is relatively recent and has profound economic implications.” To bolster their case, they introduce a variety of examples where programs are accomplishing intelligent tasks: machine translation, massive text discovery programs used by lawyers, IBM’s Watson and Deep Blue, the use of kiosks and vending machines and on and on. They end the book with a set of suggestions that would surprise no one who reads economists: more redistribution by the state, an education system that increases human capital by making humans partners with machines rather than their competitors, adopting a wait-and-see approach to the many new forms of collaboration that we see in this digital age (crowd-sourcing, the “sharing economy” etc.) and a few others.
Curiously though, Race Against the Machine has very little to say about the innards of the technology whose rise so concerns the writers. How or why these programs work is simply not relevant to Brynjolfsson and McAfee’s argument. Usually, their go-to explanation is Moore’s Law. (Perhaps this is another way in which Moore’s Law has become a socio-technical entity.) To answer the question “why now,” they use a little fable (which they credit to Ray Kurzweil). If you take a chessboard and put 2 coins on the first square, 4 on the second, 8 on the third, and keep doubling, the numbers get staggeringly large surprisingly soon. They suggest that this is what happened with AI. Earlier advances in computing power were too small to be noticed – but now we’ve passed a critical stage. Machines are increasingly going to do tasks that only humans have been able to do in the past.
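Brynjolfsson and McAfee's borrowed fable is just compounding doublings, which a few lines of code make concrete – the "second half of the chessboard" is where the counts stop being imaginable:

```python
# Kurzweil's chessboard fable: square n holds 2**n coins.
squares = [2 ** n for n in range(1, 65)]

print(squares[0])   # square 1: 2 coins
print(squares[31])  # square 32: 4294967296 -- past four billion at the halfway mark
print(squares[63])  # square 64: 18446744073709551616
```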
If, as an STS scholar, you find this explanation a bit hand-wavy, I completely agree. How might one think about this new resurgence of AI as a historian of technology or a technology studies scholar rather than as a neo-liberal economist? In the rest of this post, I want to offer a reading of the history of Artificial Intelligence that is slightly more complicated than “machines are growing more intelligent” or “computing power has increased exponentially, leading to more intelligent programs.” I want to suggest that opening up the black-box of AI can be one way to think about what it means to be human in a world of software. And by “opening up the black box of AI,” I don’t mean some sort of straightforward internalist reverse-engineering but rather trying to understand the systems of production that these programs are a part of, and why they came to be so. (In this post, I won’t be going into the political-economic anxieties that animated many of the articles I linked to above, although I think these are real and important – even if sometimes articulated in ways that we may not agree with.)
With that throat-clearing out of the way, let me get into the meat of my argument. Today, even computer scientists laugh about the hubris of early AI. Here is a 1966 document from Seymour Papert describing what he calls his “summer vision project”:
The summer vision project is an attempt to use our summer workers effectively in the construction of a significant part of a visual system. The particular task was chosen partly because it can be segmented into sub-problems which allow individuals to work independently and yet participate in the construction of a system complex enough to be a real landmark in the development of “pattern recognition”.
One summer to create a system “complex enough to be a real landmark” in pattern recognition! Good luck with that!
Even back then, AI was not a static entity. Papert’s approach – the dominant one in the mid-1960s – was what Mikel Olazaran calls “symbolic AI.” Essentially, in this approach, intelligence was seen to reside in the combination of algorithms, knowledge representation, and the rules of symbolic logic. The expert systems built in the 1980s (about whose “brittleness” Diana Forsythe has written memorably) were the culmination of symbolic AI. These were built by AI experts going out and “eliciting knowledge” from “domain experts” through interviews (although, as Forsythe points out, they would usually interview only one expert), and then crafting a series of intricate rules encoding what they thought these experts did.
The other approach was connectionism. Rather than thought being constituted by linguistically rendered rules, connectionists suggested that it emerged from something that looked like a bunch of inter-connected neurons. This research drew extensively from the idea of the “perceptron” that Frank Rosenblatt conceived in the 1950s, and from the ideas of feedback in cybernetics. Perceptrons were discredited when Papert and Minsky, the leaders of the symbolic AI faction, showed that a single perceptron could not represent even simple functions like XOR, and then revived in the 1980s with the popularization of the back-propagation algorithm by Rumelhart, Hinton, and Williams. As Olazaran shows, the perceptron controversy (i.e. the way perceptrons were discredited and then revived) is an excellent test-case for showing that science and technology in the making allow for a great deal of interpretive flexibility.
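A toy implementation makes Papert and Minsky's point concrete. The single-unit perceptron learning rule below is a bare-bones sketch (not Rosenblatt's original formulation): it converges on a linearly separable function like AND, but no choice of one unit's weights can compute XOR, which is exactly the gap that multi-layer networks trained with back-propagation later closed.

```python
def train_perceptron(data, epochs=25, lr=0.1):
    """Classic single-unit perceptron learning rule on two inputs."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            w[0] += lr * (y - pred) * x1
            w[1] += lr * (y - pred) * x2
            b += lr * (y - pred)
    return w, b

def predict(w, b, points):
    return [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for x1, x2 in points]

points = [(0, 0), (0, 1), (1, 0), (1, 1)]

# AND is linearly separable, so the learning rule converges to a perfect fit.
w, b = train_perceptron(list(zip(points, [0, 0, 0, 1])))
print(predict(w, b, points))  # [0, 0, 0, 1]

# XOR is not linearly separable: no single linear threshold unit can fit
# all four points, however long we train.
w, b = train_perceptron(list(zip(points, [0, 1, 1, 0])))
print(predict(w, b, points))  # never equals [0, 1, 1, 0]
```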
The revival of connectionism led to more and more interest in using statistical methods to do intelligent tasks. Today the name that computer scientists use for these methods is “machine learning” (or sometimes “data mining”). Indeed, it is machine learning that is at the heart of many of the programs discussed by Brynjolfsson and McAfee (and interestingly enough, the phrase does not appear at all in the book). The question of how this shift of interest to machine learning happened is open. Perhaps it was because funds for symbolic AI dried up; also, corporations had, by this time, started to increasingly use computers and had managed to collect a significant amount of data which they were eager to use in new ways. I, for one, am looking forward to Matthew Jones’ in-progress history of data mining.
The difference between machine learning/connectionist approaches and symbolic AI is that in the former, the “rules” for doing the intelligent task are statistical – expressed as a rather complicated mathematical function rather than linguistically – and generated by the machine itself as it is trained by humans on what is called “training data.” This is not to say that humans have no say in these rules. Indeed, many of the decisions – the kind of statistical method to use (SVMs vs. decision trees, etc.), the number of parameters in the classifying function, and of course the “inputs” to the algorithm – are made by humans using contingent yet situationally specific criteria. But while humans can “tune” the system, the bulk of the “learning” is accomplished by the statistical algorithm that “fits” a particular function to the training data.
[Figure: Expert systems as the culmination of symbolic AI. The “intelligence” of the system consists of rules, assembled into an “inference engine,” that embody the way the expert works.]
[Figure: In machine learning and connectionist approaches, the “intelligence” resides in a mathematical function that is learnt by feeding the algorithm “training” data.]
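To make the contrast concrete, here is a minimal sketch of "learning as function fitting": a logistic-regression classifier trained by gradient descent on made-up data. Everything here – the toy data, the model family, the learning rate and epoch count – is an illustrative assumption; the point is only that the learned "rule" ends up as a mathematical function rather than a linguistic one, while humans decide what to fit and how.

```python
import math
import random

random.seed(0)

# Made-up training data: the "true" rule is y = 1 exactly when x1 + x2 > 1.
train = [(x1, x2, 1 if x1 + x2 > 1 else 0)
         for x1, x2 in ((random.random(), random.random()) for _ in range(200))]

# Human decisions: the model family (logistic), the features, the knobs.
w1 = w2 = b = 0.0
lr, epochs = 0.5, 200

# The "learning": stochastic gradient descent fits the parameters to the data.
for _ in range(epochs):
    for x1, x2, y in train:
        p = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))
        w1 += lr * (y - p) * x1
        w2 += lr * (y - p) * x2
        b += lr * (y - p)

# The learned "rule" is a fitted mathematical function, not a linguistic one.
def classify(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

accuracy = sum(classify(x1, x2) == y for x1, x2, y in train) / len(train)
print(round(accuracy, 2))  # high, since the toy problem is linearly separable
```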
I want to suggest that the rise of machine learning was accompanied by three institutional changes in the way AI research was done. First, researchers stopped worrying about the generalizability of their algorithms and concentrated on specific tasks. Second, they concentrated their efforts on building infrastructures through which machine learning algorithms could be scaled and used as off-the-shelf components. And finally, they turned to networked computing itself to combine these machine learning algorithms with human actors. (The culmination of these changes is crowd-sourcing, which is both a source of data for machine learning algorithms and part of a scaled machine learning architecture that accomplishes tasks using both humans and machines.)
AI practitioners today – in particular, machine learning researchers – are perfectly willing to be practical, to concentrate on particular domains rather than look for general purpose laws. For instance, consider this definition of machine learning given by Tom Mitchell (the author of a canonical textbook on the topic) in a piece written in 2006 – one of those pieces where the giants of the field try to set out the outstanding research problems for young researchers to solve.
To be more precise, we say that a machine learns with respect to a particular task T, performance metric P, and type of experience E, if the system reliably improves its performance P at task T, following experience E. Depending on how we specify T, P, and E, the learning task might also be called by names such as data mining, autonomous discovery, database updating, programming by example, etc.
Researchers concentrate on coming up with practical definitions of T, P, and E that they can then proceed to solve. Back when I was a computer scientist designing algorithms to detect “concepts” in video, my adviser used to complain when IBM researchers designed classifiers to detect concepts like “Bill Clinton” or “Hillary Clinton.” His point was two-fold: first, designing a classifier to detect “Bill Clinton” in a video was not good science, because it in no way advanced the research agenda of detecting concepts in video in general; second, IBM would do it anyway, because it could (it had the resources) and because the classifier would be useful.
Recently, Noam Chomsky criticized this tendency of AI researchers to use probabilistic models in computational linguistics. Probabilistic models were fine, Chomsky suggested, if all you wanted to do was make Google Translate work, but they were not real science, they were not causal. In response, Peter Norvig (co-author of the famous textbook on Artificial Intelligence and the director of research at Google) replied that probabilistic modeling works – and in a way that those elegant theories of the General Problem Solver never did. And Norvig goes even further in advocating that computer scientists not worry too much about generalized theories – because nature, he suggests, is far too complicated for that. (I really recommend reading Norvig’s response to Chomsky since it’s a kind of debate that computer scientists rarely have.)
The second factor is scale. Computer scientists have concentrated a great deal of effort in these past few years on building infrastructures that make machine learning easier. This includes black-boxed, off-the-shelf versions of these algorithms that can simply be trained and used. It also includes massive frameworks in which these algorithms can be coupled to each other and multiplied to build more and more complicated systems. So, for instance, you connect a thousand classifiers together (one for Bill Clinton, another for cars, etc.), and the result is a powerful system that can be used for “intelligent” tasks in video concept detection (e.g. “find all clips in videos where Bill Clinton is driven around”). IBM’s Watson is exactly an example of such a massively scaled system – scaled to almost unimaginable levels solely so that it could match the human challengers in a particular task, i.e. Jeopardy. In Watson, there are classifiers whose output trains other classifiers, and on and on.
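The composition pattern can be sketched schematically (this is an illustration of the idea only, not Watson's or IBM's actual architecture; the clip annotations, concept names, and threshold are all made up):

```python
# Each "classifier" is a black box: a function from a clip to a score.
def make_concept_detector(concept):
    """Stand-in for a trained, off-the-shelf classifier. Here it just checks
    a clip's (hypothetical) annotations; a real one would score pixels."""
    def detector(clip):
        return 1.0 if concept in clip["tags"] else 0.0
    return detector

# A bank of per-concept classifiers, used as interchangeable components.
detectors = {name: make_concept_detector(name)
             for name in ("bill clinton", "car", "outdoors", "crowd")}

def find_clips(clips, concepts, threshold=0.5):
    """Compose detectors: a clip matches when every requested concept fires."""
    return [c["id"] for c in clips
            if all(detectors[name](c) > threshold for name in concepts)]

clips = [
    {"id": 1, "tags": {"bill clinton", "car", "outdoors"}},
    {"id": 2, "tags": {"car", "outdoors"}},
    {"id": 3, "tags": {"bill clinton", "crowd"}},
]

# "Find all clips where Bill Clinton is driven around" becomes a conjunction.
print(find_clips(clips, ["bill clinton", "car"]))  # [1]
```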
The third factor is, I think, the most interesting. Machine learning researchers do not really expect their algorithms and programs to do all the work. Rather, they see these algorithms as working with humans, who might supervise them, tune them, provide them with data, interpret their results, and so on. They seem to have taken to heart Terry Winograd’s final recommendation at the end of Understanding Computers and Cognition, in which he actively disavowed the symbolic AI that he had worked so hard to create and instead suggested that computer scientists concentrate on designing useful assemblages of humans and computers rather than on creating intelligent programs. This is brought out beautifully in Alexis Madrigal’s recent story about Netflix’s recommendation algorithm. Briefly, Madrigal set out to find out how Netflix’s quirky genre system (Scary Movies Based on Books, Comedies Featuring a Strong Female Lead, etc., 76,897 micro-genres in all) is generated. The answer: through a combination of humans and machines (again, nothing new here for STS scholars). First, the Netflix engineers spent a few years creating recommendation engines using predictive algorithms trained on the “ratings” that users provided, before deciding that viewer ratings alone were not enough. They then started to employ people who “code” a movie (in terms of genre, etc.) based on a rigorous how-to manual that Netflix created especially for them. This movie-categorization was then combined with the ratings of Netflix viewers. By clustering both of these in some mathematical space, and then interpreting the clusters that result, they produced the quirky genres for which Netflix has since become famous. This is not a bad example of how AI is put to work today in systems of production.
(It’s a pity that Madrigal focuses his attention so relentlessly on the specific recipe used to create the Netflix genres, because his account also offers fascinating glimpses of a whole host of behind-the-scenes work going on at Netflix.)
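The last step of the Netflix story – projecting human-coded attributes and viewer ratings into a mathematical space, clustering, and letting humans name the clusters – can be sketched as follows. Madrigal does not say which clustering method Netflix uses; plain k-means on made-up two-dimensional data stands in for it here.

```python
import random

random.seed(1)

# Made-up movies as points: (human-coded attribute, mean viewer rating),
# drawn around two "true" groups so that there is structure to discover.
movies = [(random.gauss(mx, 0.3), random.gauss(my, 0.3))
          for mx, my in [(1, 4)] * 20 + [(4, 2)] * 20]

def kmeans(points, k, iters=20):
    """Plain k-means: alternate assigning points to the nearest center
    and moving each center to the mean of its assigned points."""
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: (p[0] - centers[i][0]) ** 2
                                      + (p[1] - centers[i][1]) ** 2)
            groups[nearest].append(p)
        centers = [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
                   if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

centers, groups = kmeans(movies, k=2)

# The machine finds the clusters; a human inspects each cluster's members
# and invents the genre label ("Scary Movies Based on Books", ...).
print([len(g) for g in groups])
```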
These three factors – creating probabilistic machine learning classifiers for particular concepts without worrying about their generality, building and using massive infrastructures to combine and mix-and-match machine learning algorithms, and coupling the outputs of humans and machines in carefully supervised ways – explain, it seems to me, the revival of AI better than the usual story, as well as the reason why many people fear its political-economic implications. I will admit upfront that this is a mostly internalist account of what is happening, and it boils down to “programs are being trained to do very particular tasks using data (perhaps specially created!) and then combined in ad-hoc ways.” But I hope that it is a little more complicated than “computer power has increased by leaps and bounds” or “machines are getting more and more intelligent” (although neither of these is necessarily inaccurate). What I really hope is that it will help us think through the political economic implications of the new AI. And that, perhaps, is a subject best left for another day (given the length of this blog-post!).
As a longtime CASTAC member, I’d like to offer my take on where we’ve been and where we, as an organization might go in the future.
My first encounter with CASTAC came at the 1992 AAA meetings in San Francisco. I was a new grad student of Gary Downey’s in the STS program at Virginia Tech; however, CASTAC had been founded earlier. The following brief history is based primarily on “corridor talk,” oral histories passed along informally at AAA meetings and other fora by folks like David Hakken, Lucy Suchman, Julian Orr, David Hess and others.
CASTAC, as an organization, began as CAC (Committee for the Anthropology of Computing) at the initiation of David Hakken and a few other anthropologists who were pioneering anthropological studies of computing. David approached Marvin Harris who was, at that time, the President of the General Anthropology Division (GAD) about creating CAC as a Committee within GAD. Harris and the GAD board at the time supported the idea and CAC began its long association with GAD. CAC expanded to CASTC (and later modified to CASTAC) as anthropologists interested in the related areas of science, technology, medicine, work, and engineering joined the nascent group. The 1992 and 1993 AAA meetings were a coming out party with invited sessions that included both anthropologists and scholars from other fields like Donna Haraway and Susan Leigh Star, among others. During the same period, the same anthropologists were crashing the sociology-dominated 4S conference—a pattern recently emulated by the Science, Technology, and Medicine interest group within the Society for Medical Anthropology.
The 1990s were in many ways the high point of CASTAC activity. Sessions were organized at both the AAA and 4S meetings. CASTAC business meetings were always crowded and productive. The tragic death of Diana Forsythe resulted in the Diana Forsythe Prize celebrating her legacy and the work of anthropologists working on science, technology, and medicine. CASTAC held summer conferences at RPI and Columbia, and CASTAC chairs were active participants at GAD board meetings. And the “science wars” raged in anthropology, STS, and the academy in general—halcyon days indeed.
I became chair of CASTAC in 2005 after a period of relative decline and inactivity during the early 2000s when CASTAC did little beyond award the Forsythe Prize. The summer conferences ended and CASTAC didn’t hold a business meeting at the AAA meetings for a number of years. I offered to serve as chair because I considered, and still consider, CASTAC to be my intellectual home within the AAA and wanted CASTAC to continue to serve as a place that mentored young scholars. Senior scholars in CASTAC have always been extremely generous with their time for junior scholars and I hoped this would continue.
The first challenge, aside from walking into the middle of a GAD board meeting immediately after being elected as CASTAC chair (I was the only volunteer to take the position), was to deal with an existential crisis. We broached the question of whether we thought CASTAC still served a purpose and ought to continue as an organization and, if so, in what form. There was discussion of merging with the Society for the Anthropology of Work (SAW), of forming our own section or independent interest group within the AAA, or of maintaining the current status of staying a committee within GAD. There were benefits to each organizational model and after extensive discussion on the listserv we voted to stick with GAD. GAD provided a $500 annual budget and required much less work to maintain the organization—all we needed was a chair to represent CASTAC on the GAD board and two representatives to serve on the Forsythe selection committee. However, CASTAC still had a problem—we weren’t exactly sure what we wanted CASTAC to do or be—a problem that we are still facing today.
When CASTAC began, and for most of the 1990s, it was about the only place within the AAA where folks working on the boundaries between anthropology and STS could go. But by the mid-2000s, STS was emergent all over anthropology. Anthropologists working in areas like medical anthropology, environmental anthropology, media studies, development anthropology, linguistics, and even biological anthropology had discovered STS. Most of these folks had never heard of CASTAC, and some were forming their own groups, like the STM interest group in SMA. I saw the proliferation of STS-inspired ideas outside of CASTAC not as a threat to CASTAC but as an opportunity to develop collaborative relationships that could enhance all of these groups.
I reached out to many of these groups and individuals to let them know that CASTAC existed and that we would love to work together with them to expand the visibility and influence of STS in anthropology and anthropology in STS. I worked with a number of CASTAC members, the GAD board, the STM interest group, and SAW to organize a series of prominent CASTAC invited sessions including a session celebrating the 10th anniversary of the Forsythe Prize. I also produced a new CASTAC Directory to facilitate collaboration among people working on related areas.
My term as CASTAC chair ended after four years and current co-chairs Jenny Cool and Rachel Prentice are leading CASTAC into the digital age of the internet and the blogosphere—ironic that it has taken CASTAC so long to create a strong presence here when the organization was founded by folks studying computing. I urge CASTAC to continue to remain open to new perspectives and new areas of anthropology that intersect with STS. Finally, and most significantly, I urge CASTAC to continue to be a place where senior scholars mentor junior scholars whose research interests, much like their own research interests, may be the proverbial squares that don’t quite fit into the circles of the traditional research areas within anthropology.
My special thanks to Patricia Lange for inviting me to contribute to the blog. I hope many of you will consider adding comments and your own posts to keep the CASTAC momentum moving forward. Participating in the blog has helped me realize that it is the longstanding collegial relationships that make CASTAC my anthropological home.
The CASTAC community joined together in 2012 to launch this blog and begin dialogue on contemporary issues and research approaches. Even though the blog is just getting off the ground, certain powerful themes are already emerging across different projects and areas of study. Key themes for the coming year include dealing with large data sets, connecting individual choices to larger economic forces, and translating the meaning of actions from different realms of experience.
Perhaps the most visible trend on our minds right now involves dealing with scale. How can anthropologists, ethnographers, and other STS scholars address large data sets and approaches in research and pedagogy, while also retaining an appropriate relationship to the theories and methods that have made our disciplines strong? As we look ahead to 2013, it would seem that a big question for the CASTAC community involves finding creative and ethical ways to deal with phenomena that range from the overwhelmingly large to the microscopic, in order to provide insight and serve our constituents in research and teaching.
Discussing large-scale forays into education and research
In the past two weeks, in her posts on MOOCs in the Machine, Jordan Kraemer, our dedicated Web Producer, has been reflecting on how higher education is grappling with MOOCs, or “massive open online courses,” which open up opportunities to those who have been shut out of traditional elite institutions. At the same time, serious questions have emerged about the ramifications of trading off cost savings against high-quality education. Kraemer points out that much of the debate ties into larger arguments about why people have been shut out of education in the first place, and how the concentration of wealth and the neoliberalization of the university are challenging the old equation of supporting open-ended research that ultimately strengthens and supports teaching. She proposes new forms of graduate education in which universities support recent graduates with teaching jobs that let them gain teaching experience, shift teaching loads away from full-time faculty, and ease graduate students’ transition into full-time positions.
Part of the issue with MOOCs has to do with questions of scale: how, or whether, individual lectures and course preparation can be generalized to large-scale audiences in ways that still provide solid instruction. Higher education depends upon staying current with research, and so far we do not have enough evidence to support the idea that MOOCs will work or will address all of the concerns emerging from the neoliberalization of the academy. Those of us interested in online interaction and pedagogy will be watching this space closely in the coming year.
Questions of scale also came into play in Daniel Miller’s discussion of doing Eight Comparative Ethnographies. Miller argues that doing several ethnographies at the same time enables comparative questions that are not possible when investigating one site alone. He provides an example from social network sites: to what extent are particular behaviors the product of a type of site, a single site, or the cultures in which a site is embedded? Is a behavior what it is because it is happening on Facebook, or because the participants are Brazilian? A comparative study enables a level of analysis that is more inclusive than that derived from a single study. Expanding scale without compromising the traditions and benefits of ethnographic work remains a challenge for these and other large-scale projects, which have the potential to provide crucial insights.
Making small-scale choices visible
As one set of researchers brings up issues with regard to enormously large-scale education and research, other STS participants on The CASTAC Blog are dealing with the opposite issue: grappling with how the dynamics of extremely personal and individualistic acts—such as the donation of sex cells—interact with large-scale economic and cultural forces. In her post on The Medical Market for Eggs and Sperm, Rene Almeling, the winner of the 2012 Forsythe Prize, provides an inside look into how human beings’ donations of sex cells are connected to much larger economic forces that play out differently for women and men. Women are urged to regard egg donation as a feminine act of gift-giving; men are encouraged to see donation as a job. Almeling ties what might seem an individual act to economic forces, as well as to gendered, cultural expectations about families and reproduction. Gendered framings of donation not only impact the individuals who provide genetic material, but also strongly influence the structure of the market for sex cells.
Another key issue on our minds has to do with dealing with personal responsibility and showing how individual choices impact much larger social and economic forces in finance, computing, and going green.
In his post, On Building Social Robustness, David Hakken raises the question of how individuals contributed to large-scale economic and social crises, such as the recent disasters in the world of finance. His project is informed by work grappling with the first “5,000 years” of the history of debt. He proposes developing a notion of social robustness, parallel to the technical notion of robustness in computer science.
His work makes intriguing use of ideas from the people whom we study, applying them as inspiration for making social change. When Hakken asks about the extent to which computing professionals are ethically responsible for the financial crisis, he is proposing a way of asking how a large-scale disaster can be traced to more individual, micro-units of action. By investigating these connections, his project informs a conversation that is increasingly picking up steam in the anthropology of value.
Hakken’s reflections are especially haunting as he warns of the difficulties of building a career in anthropology and STS. As he is moving towards retirement, his perspective is especially valued in our community. As an antidote to more provincial institutional perspectives, he urges a more consolidated and community approach that involves supporting each other in doing the important work that the CASTAC community has the potential to achieve.
Questions of scale and responsibility are once again intertwined in David J. Hess’s post on Opening Political Opportunities for a Green Transition. Hess points out that a non-partisan political issue has become partisan despite the fact that the planet has now surpassed a carbon dioxide level it has not seen for at least 800,000 years! But because the change is imperceptibly slow to the human eye, politics is allowed to complicate action. Hess has worked to investigate what he calls the “problem behind the problem”: the lack of political will to address environmental sustainability and social fairness, which considerably worsens the environmental problem itself. He offers real solutions through an ambitious three-part series of books that propose “alternative pathways,” or social movements centered on reform, in part through the efforts of the private sector.
Notably, personal experiences in anthropology inform Hess’s work. Although he is in a sociology department and in an energy and environment institute, he points out that an anthropological sensibility continues to inform his thinking. While the discourse on these issues has traditionally revolved around a two-party system, Hess’s more anthropological approach makes visible other ideologies, such as localism and developmentalism, that may pave a more direct path to “good green jobs” and a more sensitive and responsible green policy. Again interacting with questions of scale, Hess’s notions of responsibility are grounded in understanding the “broad contours” of the “tectonic shifts” in ideology and policy that are underway in working toward a green transition in the United States and around the world. Without real action, however, his prognosis remains pessimistic.
Translating phenomena across different realms of experience
A theme that also emerged from our nascent blog’s initial posts had to do with understanding the ramifications of processing one realm of experience through metaphors and concepts from another. In her post on the Anthropological Investigations of MIME-NET, Lucy Suchman explores the darker side of entertainment and its relationship to military applications. She investigates how information and communication technologies have “intensified rather than dissipated” what theorists have described as the “fog of war.”
The problem is partly one of translation. How is it possible to maintain what military strategists call “situational awareness,” that is, a constant and accurate mental image of relevant tactical information? Suchman is studying activities such as The Flatworld Project, which brings together practitioners from the Hollywood film industry, gaming, and other models of immersive computing to understand these dynamics. Such a project also involves analyzing how these approaches “extend human capacities for action at a distance,” and it presents ethical challenges to researchers as they grapple with military realms and connect seemingly disparate but interrelated areas such as war and healthcare.
Lisa Messeri’s post, Anthropology and Outer Space, offers a fascinating look into human conceptualization of place. She asks, why should earthlings be concerned about what is happening on Mars? Her work focuses on how “scientists transform planets from objects into places.” Significant milestones in space exploration, such as the transit of Venus across the Sun (not scheduled to occur again until 2117) and the landing of the Mars rover Curiosity, provide rich areas to mine for understanding cultural notions of place and human exploration. Curiosity has its own Twitter account (!) and tweets freely about its experience of “springtime” in Mars’s southern hemisphere. Messeri argues that this kind of language “bridges” our worlds, in that Curiosity somehow seems to experience something familiar to humans—springtime. Scientists are now studying things so far away that telescopes cannot take an image of them. Somehow, these “invisible” objects become familiar and complex. Planets begin to seem like places because of the way in which language “makes the strange familiar,” bridging the experience between events on an exoplanet and life on Earth.
Astronomers become place makers, and observing these processes shows how spaces become “social” even though, as Messeri argues, “humans will never visit such planetary places.” Messeri shows how such conceptualizations can lead to the spread of erroneous scientific rumors that get reported by national news organizations. Her work not only shows how knowledge production is compromised by the use of such metaphors but also provides an intriguing look at how humans process invisible objects through the cultural production of imagined place.
Given that questions of scale were on our minds in 2012, it is especially fitting that we launch 2013 with a discussion about Big Data, and the challenges and opportunities that emerge when entities collect and combine huge data sets that are far too large to handle through ordinary coding schemes or desktop databases. Social scientists, technologists, and other researchers must grapple with numerous issues including legibility, data integrity, ethics, and usability. I am particularly pleased that David Hakken agreed to be interviewed by The CASTAC Blog to discuss his views. Next week, he provides fascinating insights into what the future holds for dealing with Big Data!
Before signing off, I would like to thank everyone for their participation in The CASTAC Blog, especially those who wrote posts, left comments, read articles, and tweeted our posts to the world. The richness of the posts makes it difficult to adequately cover all of the past year’s content in one commentary, but rest assured that every post contributes to the conversation and is valued by the CASTAC community.
In an effort to include more voices and keep a continuing flow of content, The CASTAC Blog is now seeking a core group of “frequent” contributors to keep pace with new developments in this space in 2013. Note that I use the term “frequent” loosely—even a few posts throughout the year make you a frequent contributor. Please consider sharing your thoughts and views with the CASTAC community. If you would like to join in, please email me at: email@example.com.
I look forward to an interesting and productive year ahead!
Patricia G. Lange
The CASTAC Blog
Greetings! Welcome to The CASTAC Blog, an exchange for ideas and information about science and technology as social phenomena. We hope to build on a thriving community of scholars from around the world who are concerned about the implications of technologized products and worldviews for human beings and other forms of life. Our focus is interdisciplinary and welcoming to a variety of scholars interested in a diverse set of research issues, in ethics, and in the impacts of technology on the increasingly blended forms of humans and machines in contemporary life.
The CASTAC Blog was created by Patricia G. Lange, Jennifer Cool, and Jordan Kraemer, who are all members of the Committee on the Anthropology of Science, Technology, and Computing (CASTAC). CASTAC is a sub-committee of the General Anthropology Division (GAD) of the American Anthropological Association (AAA). For more than 20 years, CASTAC has had a thriving presence at AAA, as researchers have come together to exchange views about what it means to conduct anthropological research in technologized arenas. Sometimes the opportunities and challenges we face are very different, given that we research everything from nanotechnology to new media. In other instances, though, we face similar challenges, such as public perception of how we as researchers question the effects and processes of science, technology, and computing (the so-called “science wars”). Other challenges we often face include working in interdisciplinary terrain and encountering resistance from the academy or industry to our contributions. Many of us simply wish to geek out and connect with other people who are doing cool things at the intersection of anthropology, sociology, and science and technology studies.
Our goal is to encourage dialogue—in a truly polyvocal space—on research findings, tools, new events, and social connections to others in this intersection of domains. We welcome contributions from interested parties within and outside of CASTAC to post about their research, contribute off-the-cuff comments, or ask stimulating questions that can bring greater understanding to the processes and products of humans’ engagement with technology.
Even writing a simple description of this domain presents challenges. The discipline of anthropology has exhibited a long-standing, anthropos-centric focus, but scholars within our community are already writing about the importance of microbes, biological organisms, and artificial life in ways that inevitably broaden the terrain of anthropological inquiry. Those of us engaged in research on new media have also pushed the boundaries of anthropological practice by following the “action” and going online to investigate new social formations that increasingly rely on mediated communication.
In an important way, we are all pushing the boundaries of anthropology. We wish to create a space where this kind of intellectual risk-taking is safe and welcome. I think we all realize that what we are doing today is, in fact, going to be the taken-for-granted anthropology of tomorrow. When I faced resistance from certain quarters about my dissertation project on MUDs (multi-user dimensions) and the social implications of tech talk, a wonderful mentor at the University of Michigan told me that I was ahead of my time and that my colleagues would catch up one day. We suspect many, if not all, of us have similar tales to tell, and the stakes are considerably higher in a number of technical and scientific areas. Happily, we have a community at hand that is ready and eager to hear what you have to say!
Consider this space a dialogue in progress aimed at promoting community both within and outside of CASTAC. We are interested in hearing many voices rather than gathering journal-ready copy. As we all know, scientific and humanistic insights need not travel the single path of the peer-reviewed journal. Backstage conversations often spark the big ideas and provide much-needed support in challenging times.
Of course, social spaces like this only work if everyone contributes in some way. After many years of relative quiet, CASTAC’s business meeting at AAA in the Fall of 2011 showed that many of us wish to continue to meet and keep alive a space for interacting with our colleagues. We have devised The CASTAC Blog to accommodate different kinds of dialogue and contributions. We welcome submissions from all scholars in this area and from people with different perspectives, including students, faculty, practitioners, policy makers, and other interested readers.
- People are encouraged to post about their initial observations, ongoing work, or research results in the Research section of the blog.
- Other people may wish to talk about methods or tools that they found useful or problematic in our Tools & Techniques section.
- Our section Beyond the Academy may be of particular interest to those who grapple with these issues in non-academic settings.
- We have also included a Member Sound Off section to encourage ideas about how the community can be improved, and to encourage personal statements of what it means to be part of a community like this.
What do you hope being part of this dialogue will bring? What can The CASTAC Blog, the organization of CASTAC or even other scholars in this space do to help you attain your goals? Join us!
COMING SOON! We have wonderful posts from Lucy Suchman, David Hakken, and David J. Hess in the pipeline, so stay tuned to the CASTAC Blog!
– Patricia G. Lange, Editor-in-Chief