Tag: algorithms

Anti-Queer Violence, Bearing Witness, and Thinking with Algorithms on Social Media

In early June 2019, news began to break that Johana Medina León, a Salvadoran transgender woman, had died of pneumonia four days after being released from nearly six weeks in ICE custody. Before long, my Facebook feed was filled with stories detailing the persecution Johana faced in El Salvador because of her gender identity; her dangerous journey to the United States to seek asylum; and her final moments as she struggled to save her own life, as it became clear no one else would. She might have saved it, had she been given the resources: in El Salvador, Johana was a nurse. Johana’s death is tragic for many reasons, not least because, had it not been for social media, it likely would have gone unnoticed. (read more...)

Clinical Data in the U.S. Department of Veterans Affairs: Ethnographic Engagements

By: Peter Taber, Nicholas Rattray, Lauren Penney, Megan McCullough, and Samantha Gottlieb

This post emerged from a 2018 Society for Applied Anthropology panel on anthropological engagements with health data in the U.S. Department of Veterans Affairs (VA). Serving over 9 million enrollees with a current federal budget of USD 68 billion, the VA is an important testing site for digital healthcare infrastructure, as it has been for several decades. The panel brought our VA research and quality improvement (QI) efforts targeting the electronic health record (EHR) and other digital infrastructure into dialogue with existing work on the social lives of data and algorithms, as well as the broader concerns of medical anthropology and STS in an era of the “datafication of health” (Ruckenstein and Schüll 2017). Extracts from our conversation, presented below, are taken from a follow-up video call exploring these issues. (read more...)

Dumbwaiters and Smartphones: The Responsibility of Intelligence

“I don’t have to drink alone,” she paused for comedic effect, “now that I have Alexa.” Thus was the punchline of a story told by a widowed octogenarian at a recent wedding. Alexa is a mass-produced personality that can play music, suggest items for purchase, monitor consumption and health habits, or, like any good friend, just listen. While all these tasks could be performed in silence with various algorithmic appliances, Alexa and her cousins from Google and Apple are imbued with a perceived autonomy directly stemming from their capacity for vocalization. Speech, it seems, beckons the liberation of abiotic materials from their machinic programming. (read more...)

Out-of-Body Workspaces: Andy Serkis and Motion Capture Technologies

Over the last two decades, the entertainment industry has experienced a turn to what Lucy Suchman termed virtualization technologies in film and videogame production (Suchman 2016). In addition, production studies scholars have described authorship as linked to control and ownership, sharpening distinctions between “creative” and “technical” work, a divide with significant economic repercussions (Caldwell 2008). These ideas are useful in understanding film studio workspaces, where visual effects (VFX) workers and actors collaborate in creating believable virtual characters, using three-dimensional (3D) modeling software and motion-capture (mo-cap) systems to capture the attributes and movements of human bodies and transfer them to digital models. Once captured, digital performances become data, to be manipulated and merged seamlessly with those of live actors and environments in the final film. The introduction of virtualization technologies and computer graphics tools has surfaced tensions over creative control, authorship, and labor. British actor Andy Serkis has been a high-profile apologist for the human actor’s central role in bringing virtual characters to life for film. Serkis, whom Rolling Stone called “the king of post-human acting,” is known for using mo-cap to breathe life into digitally created, non-human characters. His notable performances include the creature Gollum in the Lord of the Rings trilogy (2001-2003), the ape Caesar in Rise of the Planet of the Apes (2011), Supreme Leader Snoke in Star Wars: The Force Awakens (2015), and several characters in Mowgli: Legend of the Jungle (2018), which he also directed. While Serkis’ performances have made him highly visible to audiences, digital labor historians have begun documenting the often-invisible film workers creating 3D models and VFX (Curtin and Sanson 2017). The tensions between mo-cap performers and VFX workers reveal the contours of an emerging hybrid workspace that combines actors’ physical bodies and movements with VFX workers’ manipulations of digital geometry and data. (read more...)
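
To make the phrase “digital performances become data” concrete, here is a minimal, hypothetical sketch of how captured motion might be represented and retargeted in code. The joint names, frame format, and retarget_frame function are invented for illustration and do not correspond to any particular mo-cap system or studio pipeline.

```python
# Hypothetical sketch: a captured performance represented as per-frame joint
# rotations, then retargeted onto a digital character rig. Joint names, the
# frame format, and the scaling step are invented for illustration only.
from dataclasses import dataclass

@dataclass
class JointRotation:
    joint: str   # e.g. "left_elbow"
    x: float     # Euler angles in degrees (illustrative convention)
    y: float
    z: float

# One captured frame: the actor's pose at a single instant.
captured_frame = [
    JointRotation("spine", 2.0, 0.5, -1.0),
    JointRotation("left_elbow", 45.0, 10.0, 0.0),
    JointRotation("right_knee", 30.0, 0.0, 5.0),
]

# Mapping from the performer's skeleton to the digital character's rig.
RIG_MAP = {
    "spine": "creature_spine",
    "left_elbow": "creature_left_elbow",
    "right_knee": "creature_right_knee",
}

def retarget_frame(frame, rig_map, scale=1.0):
    """Rename joints onto the target rig, optionally exaggerating the motion."""
    return [JointRotation(rig_map[j.joint], j.x * scale, j.y * scale, j.z * scale)
            for j in frame if j.joint in rig_map]

# Once in this form the performance is "just data": it can be edited,
# exaggerated, or blended before it drives the final character.
creature_frame = retarget_frame(captured_frame, RIG_MAP, scale=1.2)
print(creature_frame[1])
```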

Critiquing Big Data in China and Beyond

“I do think that the Internet truly makes us feel the world can become a smaller place,” an interlocutor, whom I will call Bo, told me in his parents’ home in Shijiazhuang, a city in China’s Hebei Province. It was late 2014, and he was studying to become a filmmaker in Beijing. During our conversation, he told me about discovering Google Earth when he was younger, recalling how, suddenly, he could “see any place in the world” from the comfort of his home. He could zoom in to explore a mountain village in Iceland, a house, and even a village dog, feeling that, without Google Earth, he would never have been able to visit such faraway places. The experience might have been virtual (xuni), he mused, but it had also been real (zhenshi). His account expressed a kind of enthusiasm for the digital that I often encountered during my ethnographic fieldwork on digital opportunity in China. However, his story was made especially compelling by the oppressive smog plaguing the city outside. While neighboring buildings disappeared in a toxic fog, he expressed his excitement about “seeing” a digitally mediated “Google Earth.” (read more...)

Rule of Law by Machine? Not so Fast!

Editor’s Note: This is the sixth post in our Law in Computation series. Back in the mid-1990s when I was a graduate student, I “interned” at a parole office as part of my methods training in field research. In my first week, another intern—an undergraduate administration of justice student from a local college—trained me in how to complete pre-release reports for those men and women coming out of prison and entering onto parole supervision. The pre-release report was largely centered on a numeric evaluation of the future parolee’s risks and needs. The instrument used by the parole office was relatively crude, but it exemplified a trend in criminal justice that pits numbers-based tools, designed to predict and categorize system-involved subjects, against more intuitive judgments of legal actors in the system. (read more...)
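
As a purely hypothetical illustration of what a crude, numbers-based instrument of this kind can look like, the sketch below computes an additive risk-and-needs score in Python. The items, weights, and cut-off thresholds are invented and do not reflect the instrument used by this or any other parole office.

```python
# Hypothetical sketch of an additive risk/needs instrument of the kind
# described above. All items, weights, and thresholds are invented for
# illustration only.

RISK_ITEMS = {
    "prior_convictions": 2,      # weight applied per recorded conviction
    "age_under_25": 3,           # flat weight if the person is under 25
    "unstable_housing": 2,
    "substance_use_history": 2,
    "employed_at_release": -2,   # protective factor lowers the score
}

def risk_score(record: dict) -> int:
    """Sum weighted item responses into a single numeric score."""
    score = record.get("prior_convictions", 0) * RISK_ITEMS["prior_convictions"]
    for item in ("age_under_25", "unstable_housing",
                 "substance_use_history", "employed_at_release"):
        if record.get(item, False):
            score += RISK_ITEMS[item]
    return score

def risk_category(score: int) -> str:
    """Map the numeric score onto coarse supervision categories."""
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

# Example: three prior convictions, under 25, employed at release.
example = {"prior_convictions": 3, "age_under_25": True, "employed_at_release": True}
print(risk_score(example), risk_category(risk_score(example)))  # prints: 7 medium
```

The point of the sketch is the crudeness the post describes: a handful of weighted items and fixed cut-offs standing in for the more intuitive judgments of legal actors.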

Killer Robots: Algorithmic Warfare and Techno-Ethics

Editor’s Note: This is the fourth post in our Law in Computation series. War is an experiment in catastrophe; yet policymakers today believe chance can be tamed and ‘ethical war’ waged simply by increasing battlefield technology and systematically removing human error and bias. How does an increasingly artificially intelligent battlefield reshape our capacity to think ethically in warfare (Schwartz, 2016)? The traditional ethics of war bases the justness of how one fights on the two jus in bello (justice in fighting war) principles: discrimination and proportionality, weighed against military necessity. Although how these categories apply in various wars has always been contested, the core assumption is that they are designed to be an ethics of practical judgment (see Brown, 2010), allowing decision-makers to weigh the potentially grave consequences of civilian casualties against overall war aims. However, the algorithmic construction of terrorists has radically shifted the idea of who qualifies as a combatant in warfare. What, then, are the ethical implications of a computational ethics of war for researchers and practitioners? (read more...)

From Law in Action to Law in Computation: Preparing PhD Students for Technology, Law and Society

Editor’s Note: This is the inaugural post for the Law in Computation series, a collection of blog posts from faculty and graduate student fellows at UC Irvine’s Technology, Law and Society Institute. Leading up to a summer institute in 2018, the series provides examples of research and thinking from this interdisciplinary group and elaborates how sociolegal scholars might address new computing technologies, like artificial intelligence, blockchain, machine learning, autonomous vehicles, and more. In 2015, a robot buying illicit items off the “dark web” was confiscated by the Swiss authorities along with its haul of Ecstasy pills, a Hungarian passport, counterfeit designer clothing, and other items. Dubbed Random Darknet Shopper, it was a bot programmed to shop on the dark web using Bitcoin, the pseudo-anonymous cryptocurrency that, at the time of my writing, is experiencing an enormous bubble. Once assumed to be the domain of criminals and drug dealers, Bitcoin has been pushed into the mainstream by that bubble, discussed even on popular television shows like The Daily Show and at policy forums worldwide. It increased in value from just over $1,000 to over $8,000 between February 2017 and February 2018, peaking at over $19,000 in mid-December 2017. While it was pretty obscure just a few months ago, you probably have a cousin or uncle currently “mining” Bitcoin or trading in similar digital tokens, whether you know it or not. (read more...)
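
As a rough, hypothetical illustration of the kind of behavior attributed to such a bot (not its actual code), the sketch below picks one random, affordable item from an invented catalog against a fixed weekly budget. The catalog, prices, and “purchase” step are simulated; nothing here touches a darknet market or a real Bitcoin wallet.

```python
# Hypothetical sketch of a random-shopping bot of the kind described above:
# once a week it picks an item at random from a catalog and "buys" it if the
# price fits a fixed budget. Catalog, prices, and the purchase step are
# invented; the order is only printed, never placed.
import random

WEEKLY_BUDGET_BTC = 0.01  # fixed weekly allowance (made-up figure)

CATALOG = [
    {"name": "item_a", "price_btc": 0.004},
    {"name": "item_b", "price_btc": 0.009},
    {"name": "item_c", "price_btc": 0.015},  # over budget, never chosen
]

def pick_random_purchase(catalog, budget_btc):
    """Choose one random item the weekly budget can cover, or None."""
    affordable = [item for item in catalog if item["price_btc"] <= budget_btc]
    return random.choice(affordable) if affordable else None

def run_weekly_cycle():
    """One cycle: pick an item and log the simulated order."""
    choice = pick_random_purchase(CATALOG, WEEKLY_BUDGET_BTC)
    if choice is None:
        print("Nothing affordable this week.")
    else:
        print(f"Ordering {choice['name']} for {choice['price_btc']} BTC (simulated).")

if __name__ == "__main__":
    run_weekly_cycle()  # a real bot would repeat this on a weekly timer
```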