Tag: artificial intelligence

Dumbwaiters and Smartphones: The Responsibility of Intelligence

“I don’t have to drink alone,” she paused for comedic effect, “now that I have Alexa.” Thus was the punchline of a story told by a widowed octogenarian at a recent wedding. Alexa is a mass-produced personality that can play music, suggest items for purchase, monitor consumption and health habits, or, like any good friend, just listen. While all these tasks could be performed in silence with various algorithmic appliances, Alexa and her cousins from Google and Apple are imbued with a perceived autonomy directly stemming from their capacity for vocalization. Speech, it seems, beckons the liberation of abiotic materials from their machinic programming. (read more...)

Gods, AIs, and Mormon Transhumanism

I’d like to start this post by juxtaposing two scenes. The first is set about two years ago, on the third floor of the Provo, Utah Convention Center. The scene was very similar to the one found at any academic conference—down to the dais, the rows of chairs, and a screen designed for projecting PowerPoint slides. We were between presentations, and as often happens during moments such as these, I was spending the interstitial time catching up with acquaintances I mostly know from online interactions. In the case at hand, the person I was talking to was in his late twenties or early thirties, someone who had recently left his position in the United States Marines; in fact, he had left the military so recently that he still had the short haircut associated with service. On his arm, he also had a tattoo of an electromagnetic equation. He was holding one hand out in front of my face as we talked. In a normal conversation, a gesture like that would be rude, but this was not a normal conversation, because he was describing how, at very close distances, he can feel the presence of electricity with his hands. He had gained this ability by intervening in his physical body—biohacking—which I will return to shortly. (read more...)

A Feeling for Robotic Skin

“I hate it when her eyes flick open. This one, her eyeballs sometimes go in different directions and it looks creepy.” Catherine reached over, gently pulling the doll’s eyelids down with one extended finger. The doll, Nova, was petite, shorter than Catherine and me, and had jet black hair and brown skin. She had been clothed in a white unitard. Nova was flanked by other dolls of various heights and skin tones: to her right, a white-skinned brunette, also a “petite” model, and across from her, a life-size replica of the porn actor Asa Akira. Henry, a newer model, stood beside Nova. He had been dressed in what might be described as office-kink-lite, a white button-down shirt and black slacks, along with black fingernail paint and a leather wrist cuff. His skull was cracked open to reveal the circuitry inside. Henry, Nova, Asa’s doll, and the others were standing propped in the display lobby of a sex doll manufacturer’s headquarters in southern California. I visited the headquarters in the spring of 2018 to meet the dolls and their makers. Catherine, the company’s media liaison, generously offered me a tour of the display showroom and production site. (read more...)

Killer Robots: Algorithmic Warfare and Techno-Ethics

Editor’s Note: This is the fourth post in our Law in Computation series. War is an experiment in catastrophe; yet policymakers today believe chance can be tamed and ‘ethical war’ waged simply by increasing battlefield technology and systematically removing human error and bias. How does an increasingly artificially intelligent battlefield reshape our capacity to think ethically in warfare (Schwartz, 2016)? Traditional ethics of war bases the justness of how one fights on the two principles of jus in bello (justice in the conduct of war): discrimination and proportionality, weighed against military necessity. Although how these principles apply in particular wars has always been contested, the core assumption is that they are designed as an ethics of practical judgment (see Brown, 2010), allowing decision-makers to weigh the potentially grave consequences of civilian casualties against overall war aims. However, the algorithmic construction of terrorists has radically shifted the idea of who qualifies as a combatant in warfare. What, then, are the ethical implications for researchers and practitioners of a computational ethics of war? (read more...)

From Law in Action to Law in Computation: Preparing PhD Students for Technology, Law and Society

Editor’s Note: This is the inaugural post for the Law in Computation series, a collection of blog posts from faculty and graduate student fellows at UC Irvine’s Technology, Law and Society Institute. Leading up to a summer institute in 2018, the series provides examples of research and thinking from this interdisciplinary group and elaborates how sociolegal scholars might address new computing technologies, like artificial intelligence, blockchain, machine learning, autonomous vehicles, and more. In 2015, a robot buying illicit items off the “dark web” was confiscated by the Swiss authorities along with its haul of Ecstasy pills, a Hungarian passport, counterfeit designer clothing, and other items. Dubbed Random Darknet Shopper, it was a bot programmed to shop on the dark web using Bitcoin, the pseudonymous cryptocurrency that, at the time of my writing, is experiencing an enormous bubble. Previously assumed to be the domain of criminals and drug dealers, Bitcoin has been pushed into the mainstream by the bubble, appearing on popular television shows like The Daily Show and in policy forums worldwide. It increased in value from just over $1,000 to over $8,000 between February 2017 and February 2018, with a peak of over $19,000 in mid-December 2017. While it was pretty obscure just a few months ago, you probably have a cousin or uncle currently “mining” Bitcoin or trading in similar digital tokens, whether you know it or not. (read more...)

Automation and Heteromation: The Future (and Present) of Labor

Editor’s note: This is a co-authored post by Bonnie Nardi and Hamid Ekbia. For the last several years, we have tried to understand how digital technology is changing labor. Of all the alleged causes of disruptions and changes in employment and work—immigrants, free trade, and technology—the last one has received the most extensive debate lately. We review the debate briefly and then discuss our research and how it bears on the questions the debate raises. (read more...)

Becoming More Capable

Editor’s Note: This is the second post in the series on Disabling Technologies. “We need to exercise the imagination in order to elbow away at the conditions of im/possibility” (Ingunn Moser & John Law 1999, 174). What is it to be capable? How might we elbow away the conditions that limit ability, to become more capable? In this short piece, I take seriously Rebekah’s invitation to account for “different ways of doing, acting, and living in the world.” The anthropological imperative to “take into account difference” and consider how objects “intersect with social worlds, imaginaries and emergent social practices” speaks to my ongoing efforts to engage with the long and troubled relationship between technology and dis/ability. Specifically, it resonates with my work that asks what, if anything, artificial intelligence (AI) might offer the blind and vision impaired. (read more...)

How (Not) to Talk about AI

Most CASTAC readers familiar with science and technology studies (STS) have probably had conversations with friends—especially friends who are scientists or engineers—that go something like this: Your friend says that artificial intelligence (AI) is on its way, whether we want it or not. Programs (or robots, take your pick) will be able to do a lot of tasks that, until now, have always needed humans. You argue that it’s not so simple; that what we’re seeing is as much a triumph of re-arranging the world as it is of technological innovation. From your point of view, a world of ubiquitous software is being created, one that draws on contingent, flexible, just-in-time human labor, with pervasive interfaces between humans and programs that make one available to the other immediately. Your comments almost always get misinterpreted as a statement that the programs themselves are not really intelligent. Is that what you believe, your friend asks? How do you explain all those amazing robot videos, then? “No, no,” you admit, “I am not saying there’s no technological innovation, but it’s complicated, you know.” Sometimes, at this point, it’s best to end the conversation and move on to other matters. (read more...)