Tag: artificial intelligence

A Feeling for Robotic Skin

“I hate it when her eyes flick open. This one, her eyeballs sometimes go in different directions and it looks creepy.” Catherine reached over, gently pulling the doll’s eyelids down with one extended finger. The doll, Nova, was petite, shorter than Catherine and me, and had jet black hair and brown skin. She was clothed in a white unitard. Nova was flanked by other dolls of various heights and skin tones: to her right, a white-skinned brunette, also a “petite” model, and across from her, a life-size replica of the porn actor Asa Akira. Henry, a newer model, stood beside Nova. He had been dressed in what might be described as office-kink-lite, a white button-down shirt and black slacks, along with black fingernail paint and a leather wrist cuff. His skull was cracked open to reveal the circuitry inside. Henry, Nova, Asa’s doll, and the others were standing propped in the display lobby of a sex doll manufacturer’s headquarters in southern California. I visited the headquarters in the spring of 2018 to meet the dolls and their makers. Catherine, the company’s media liaison, generously offered me a tour of the display showroom and production site. (read more...)

Killer Robots: Algorithmic Warfare and Techno-Ethics

Editor’s Note: This is the fourth post in our Law in Computation series. War is an experiment in catastrophe; yet policymakers today believe chance can be tamed and ‘ethical war’ waged simply by increasing battlefield technology, systematically removing human error and bias. How does an increasingly artificially intelligent battlefield reshape our capacity to think ethically in warfare (Schwartz, 2016)? Traditional ethics of war bases the justness of how one fights on the two principles of jus in bello (justice in fighting war): discrimination and proportionality, weighted against military necessity. Although how these categories apply in various wars has always been contested, the core assumption is that they are designed to be an ethics of practical judgment (see Brown, 2010) for decision-makers to weigh potentially grave consequences of civilian casualties against overall war aims. However, the algorithmic construction of terrorists has radically shifted the idea of who qualifies as a combatant in warfare. What, then, are the ethical implications for researchers and practitioners of a computational ethics of war? (read more...)

From Law in Action to Law in Computation: Preparing PhD Students for Technology, Law and Society

Editor’s Note: This is the inaugural post for the Law in Computation series, a collection of blog posts from faculty and graduate student fellows at UC Irvine’s Technology, Law and Society Institute. Leading up to a summer institute in 2018, the series provides examples of research and thinking from this interdisciplinary group and elaborates how sociolegal scholars might address new computing technologies, like artificial intelligence, blockchain, machine learning, autonomous vehicles, and more. In 2015, a robot buying illicit items off the “dark web” was confiscated by the Swiss authorities along with its haul of Ecstasy pills, a Hungarian passport, counterfeit designer clothing, and other items. Dubbed Random Darknet Shopper, it was a bot programmed to shop on the dark web using Bitcoin, the pseudo-anonymous cryptocurrency that, at the time of my writing, is experiencing an enormous bubble. Bitcoin was previously assumed to be the domain of criminals or drug dealers, but the bubble has made it more mainstream, landing it on popular television shows like The Daily Show and in policy forums worldwide. It increased in value from just over $1000 to over $8000 between February 2017 and February 2018, with a peak at over $19,000 in mid-December 2017. While it was pretty obscure just a few months ago, you probably have a cousin or uncle currently “mining” Bitcoin or trading in similar digital tokens, whether you know it or not. (read more...)

Automation and Heteromation: The Future (and Present) of Labor

Editor’s note: This is a co-authored post by Bonnie Nardi and Hamid Ekbia. For the last several years, we have tried to understand how digital technology is changing labor. Of all the alleged causes of disruptions and changes in employment and work—immigrants, free trade, and technology—the last one has received the most extensive debate lately. We review the debate briefly and then discuss our research and how it bears on the questions the debate raises. (read more...)

Becoming More Capable

Editor’s Note: This is the second post in the series on Disabling Technologies. “We need to exercise the imagination in order to elbow away at the conditions of im/possibility.” —Ingunn Moser & John Law (1999, 174) What is it to be capable? How might we elbow away the conditions that limit ability, to become more capable? In this short piece, I take seriously Rebekah’s invitation to account for “different ways of doing, acting, and living in the world.” The anthropological imperative to “take into account difference” and consider how objects “intersect with social worlds, imaginaries and emergent social practices” speaks to my ongoing efforts to engage with the long and troubled relationship between technology and dis/ability. Specifically, it resonates with my work that asks what, if anything, artificial intelligence (AI) might offer the blind and vision impaired.[1] (read more...)

How (Not) to Talk about AI

Most CASTAC readers familiar with science and technology studies (STS) have probably had conversations with friends—especially friends who are scientists or engineers—that go something like this: Your friend says that artificial intelligence (AI) is on its way, whether we want it or not. Programs (or robots, take your pick) will be able to do a lot of tasks that, until now, have always needed humans. You argue that it’s not so simple; that what we’re seeing is as much a triumph of re-arranging the world as it is of technological innovation. From your point of view, a world of ubiquitous software is being created that draws on contingent, flexible, just-in-time human labor, with pervasive interfaces between humans and programs that make each immediately available to the other. Your comments almost always get misinterpreted as a statement that the programs themselves are not really intelligent. Is that what you believe, your friend asks? How do you explain all those amazing robot videos, then? “No, no,” you admit, “I am not saying there’s no technological innovation, but it’s complicated, you know.” Sometimes, at this point, it’s best to end the conversation and move on to other matters. (read more...)

Weekly Round-up | February 10th, 2017

This week’s round-up is a bit more focused, with threads on Mars colonization, automation, and artificial intelligence. As always, we also ask you to write or find great stuff for us to share in next week’s round-up: you can send suggestions, advance-fee scams, or Venmo requests to editor@castac.org. (read more...)

Weekly Round-up | January 20th, 2017

Starting today, we’ll be posting a weekly round-up of cool stuff from around the web that the editorial collective thinks might be interesting to readers of the blog: posts from other blogs, news stories, art objects, internet ephemera. If you stumble across anything that you think might fit here, just shoot a link to editor@castac.org. Help break us out of our habitual media itineraries and parochial corners of the internet! (read more...)