Tag: ai

Gods, AIs, and Mormon Transhumanism

I’d like to start this post by juxtaposing two scenes. The first is set about two years ago, on the third floor of the Provo, Utah Convention Center. The scene was much like that of any academic conference—down to the dais, the rows of chairs, and a screen for projecting PowerPoint slides. We were between presentations, and as often happens in such moments, I was spending the interstitial time catching up with acquaintances I mostly know from online interactions. The person I was talking to was in his late twenties or early thirties and had recently left his position in the United States Marines; in fact, he had left the military so recently that he still had the short haircut associated with service. On his arm he had a tattoo of an electromagnetic equation. As we talked, he held one hand out in front of my face. In a normal conversation, a gesture like that would be rude, but this was not a normal conversation: he was describing how, at very close distances, he could feel the presence of electricity with his hands. He had gained this ability by intervening in his physical body—biohacking—a practice I will return to shortly. (read more...)

Rule of Law by Machine? Not so Fast!

Editor’s Note: This is the sixth post in our Law in Computation series. Back in the mid-1990s, when I was a graduate student, I “interned” at a parole office as part of my methods training in field research. In my first week, another intern—an undergraduate administration of justice student from a local college—trained me to complete pre-release reports for the men and women coming out of prison and entering parole supervision. The pre-release report was largely centered on a numeric evaluation of the future parolee’s risks and needs. The instrument used by the parole office was relatively crude, but it exemplified a trend in criminal justice that pits numbers-based tools, designed to predict and categorize system-involved subjects, against the more intuitive judgments of legal actors in the system. (read more...)

Killer Robots: Algorithmic Warfare and Techno-Ethics

Editor’s Note: This is the fourth post in our Law in Computation series. War is an experiment in catastrophe; yet policymakers today believe chance can be tamed and ‘ethical war’ waged simply by increasing battlefield technology and systematically removing human error and bias. How does an increasingly artificially intelligent battlefield reshape our capacity to think ethically in warfare (Schwartz, 2016)? Traditional ethics of war bases the justness of how one fights on the two principles of jus in bello (justice in fighting war): discrimination and proportionality, weighed against military necessity. Although how these categories apply in particular wars has always been contested, the core assumption is that they constitute an ethics of practical judgment (see Brown, 2010), allowing decision-makers to weigh the potentially grave consequences of civilian casualties against overall war aims. However, the algorithmic construction of terrorists has radically shifted the idea of who qualifies as a combatant in warfare. What, then, are the ethical implications for researchers and practitioners of a computational ethics of war? (read more...)

Human-Machine Interactions and the Coming Age of Autonomy

“Together we embark.” “Together we adjust.” “Together we drive.” These taglines describe the Intelligent Driving System (IDS) concept car used in Nissan’s recent demonstration of possible futures in electric and autonomous driving. Unveiled by Nissan CEO Carlos Ghosn at the Tokyo Motor Show in 2015, the IDS concept car[1] suggests rich possibilities for future driving experiences. What I’m especially curious to explore, as an anthropologist long engaged in ethnographic and anthropological research in the context of technology development, is how the seemingly dichotomous notions of “togetherness” and “autonomy” come together in advancing self-driving cars. What visions of collectivity and sociality are at play among those involved in the development of self-driving cars, and how will the vehicles themselves embody these visions? My thoughts reflect my stance as a social analyst interested in socio-technical endeavors generally, and in the social effects of automation specifically. They also reflect my vantage point as a collaborator in the process of autonomous vehicle (AV) development, as I will discuss. I’d like to consider two ways in which the notion of autonomy raises questions about sociality. The first pertains to the vehicle itself: to visions of how the vehicle will function and look, and to the experiences it will enable. The second relates to the ways AVs are being brought into being. Here I am interested in how new social formations are emerging as people work together across previously distant and newly emerging industries, knowledge domains, and practices. In what ways do the activities involved in developing autonomous vehicles suggest the rise of new global assemblages, in which ideals of autonomy stand at the center of a reconfiguration of social relations? (read more...)

2014 in Review: Re-locating the Human

In retrospect, 2014 may appear a pivotal year for technological change. It was the year that “wearable” technologies began shifting from geek gadget to mass-market consumer good (including the announcement of the Apple Watch and the rising popularity of fitness trackers); that smartphone and tablet usage outstripped that of desktop PCs for accessing the Internet, alongside growing interest in home automation and increasingly viable models for pervasive computing (such as Google’s purchase of smart thermostat maker Nest); and that computer algorithms, machine learning, and recommendation engines came increasingly to the fore of public awareness and debate (from Apple buying streaming service Beats to the effects of Facebook’s algorithms). Many of these shifts played out worldwide, or at least in diverse contexts, as when Chinese online retailer Alibaba went public and smartphone maker Xiaomi speedily surpassed most rivals. It also proved to be an exciting year on The CASTAC Blog, where our team of Associate Editors and contributors brought attention to this rapidly shifting technological landscape, and to pressing questions and debates driving anthropological inquiry into science and technology. In today’s post, I continue my predecessor Patricia Lange’s tradition of reviewing themes and highlights on the blog from the past year.
Some of these were topical, including energy, the environment, and infrastructure; crowdsourcing and the “sharing” economy; wearables, algorithms, and the “Internet of Things”; and science communication, science’s publics, and citizen science. Others were more conceptual or even experimental: reflections on long-term ethnographic engagement with technology, broader issues of scientific (and ethnographic) authority, technological infrastructures as social infrastructures and tacit knowledges (such as Jenny Cool’s co-chair report), and, broadly, how to make anthropological research into science and technology relevant within and beyond academic circles. (read more...)