Tag: machine learning

Deep Thunder: The Rise of Big Meteorology

Today has been predicted 26 billion times. The same could be said for tomorrow and the foreseeable days to follow. This prodigious divination is the work of just one entity—IBM’s The Weather Company. IBM’s 26 billion daily forecasts likely represent only a small fraction of the models and projections to which any particular day is subjected (the financial and military sectors are equally ambitious prognosticators). Backed by IBM’s computational juggernaut, The Weather Company is burning through terabytes at a brow-furrowing velocity in its effort to fit the world into a forecast. (Listen Now...)

Killer Robots: Algorithmic Warfare and Techno-Ethics

Editor’s Note: This is the fourth post in our Law in Computation series. War is an experiment in catastrophe; yet policymakers today believe chance can be tamed and ‘ethical war’ waged simply by increasing battlefield technology and systematically removing human error and bias. How does an increasingly artificially intelligent battlefield reshape our capacity to think ethically in warfare (Schwartz, 2016)? Traditional ethics of war bases the justness of how one fights on the two principles of jus in bello (justice in the conduct of war): discrimination and proportionality, weighed against military necessity. Although how these categories apply in various wars has always been contested, the core assumption is that they are designed to be an ethics of practical judgment (see Brown, 2010) for decision-makers weighing the potentially grave consequences of civilian casualties against overall war aims. However, the algorithmic construction of terrorists has radically shifted the idea of who qualifies as a combatant in warfare. What, then, are the ethical implications of a computational ethics of war for researchers and practitioners? (read more...)

From Law in Action to Law in Computation: Preparing PhD Students for Technology, Law and Society

Editor’s Note: This is the inaugural post for the Law in Computation series, a collection of blog posts from faculty and graduate student fellows at UC Irvine’s Technology, Law and Society Institute. Leading up to a summer institute in 2018, the series provides examples of research and thinking from this interdisciplinary group and elaborates how sociolegal scholars might address new computing technologies, like artificial intelligence, blockchain, machine learning, autonomous vehicles, and more. In 2015, a robot buying illicit items off the “dark web” was confiscated by the Swiss authorities along with its haul of Ecstasy pills, a Hungarian passport, counterfeit designer clothing, and other items. Dubbed Random Darknet Shopper, it was a bot programmed to shop on the dark web using Bitcoin, the pseudo-anonymous cryptocurrency that, at the time of my writing, is experiencing an enormous bubble. Previously assumed to be the domain of criminals and drug dealers, Bitcoin has been pushed into the mainstream by the bubble, appearing on popular television shows like The Daily Show and being discussed at policy forums worldwide. It increased in value from just over $1000 to over $8000 between February 2017 and February 2018, with a peak at over $19,000 in mid-December 2017. While it was pretty obscure just a few months ago, you probably have a cousin or uncle currently “mining” Bitcoin or trading in similar digital tokens, whether you know it or not. (read more...)

High-Tech Hand Work: When humans replace computers, what does it mean for jobs and for technological change?

[Editor’s Note: This post was revised on 1/28/2016 at Ben’s request. See his note below.] Author’s Note: Since its initial publication, I have reframed this post to more fully integrate the argument and data. This revised post reflects these changes. Recent years have brought a resurgence of interest in how the rapid evolution of computer technologies is affecting work. Some have examined how smart machines are replacing manual labor, swallowing up the manufacturing jobs that have driven the growth of China’s economy. Others reveal how algorithms are supplanting knowledge workers. “Big data” and “machine learning” techniques help software engineers create algorithms that make more accurate and less biased judgments than well-trained humans. Software is already doing the work of medical lab technicians and replicating higher-order cognitive functioning, such as detecting human emotions and facial expressions, processing language, and even writing news articles. Technology has long played a role in both eliminating certain types of work and creating new opportunities. Today’s debates often echo those of the past: technophiles believe that “disruption” is a source of social progress, whereas detractors worry that the coming waves of automation will deepen the insecurity and exploitation of workers. Both sides, however, often overlook the surprising ways in which, rather than creating “frictionless” economies, automation can in fact intensify the use of human labor. In the remainder of this piece, I compare an exemplary study of the industrial revolution of the 19th century with a case study from the front lines of the automation revolution that many believe is now underway. In the Victorian era, new machinery did not replace human workers, but in fact often expanded their use. The same was true at a tech startup that I observed, where artificial intelligence was combined with the routinized application of human labor. Both of these cases draw attention to the specific ways in which technology restructures labor markets not only by eliminating jobs, but also by creating new types of work that must keep pace with machinery. (read more...)

2014 in Review: Re-locating the Human

In retrospect, 2014 may appear a pivotal year for technological change. It was the year that “wearable” technologies began shifting from geek gadget to mass-market consumer good (including the announcement of the Apple Watch and the rising popularity of fitness trackers), that smartphone and tablet usage outstripped that of desktop PCs for accessing the Internet, alongside growing interest in home automation and increasingly viable models for pervasive computing (such as Google’s purchase of smart thermostat maker Nest), and that computer algorithms, machine learning, and recommendation engines came increasingly to the fore of public awareness and debate (from Apple buying streaming service Beats to the effects of Facebook’s algorithms). Many of these shifts have been playing out worldwide, or at least in diverse contexts, such as Chinese online retailer Alibaba going public and smartphone maker Xiaomi speedily surpassing most rivals. It also proved to be an exciting year on The CASTAC Blog, where our team of Associate Editors and contributors brought our attention to this rapidly shifting technological landscape, and to pressing questions and debates driving anthropological inquiry into science and technology. In today’s post, I continue my predecessor Patricia Lange’s tradition of reviewing themes and highlights on the blog from the past year.
Some of these were topical, including energy, the environment, and infrastructure; crowdsourcing and the “sharing” economy; wearables, algorithms, and the “Internet of Things”; and science communication, science’s publics, and citizen science. Others were more conceptual or even experimental: reflections on long-term ethnographic engagement with technology, broader issues of scientific (and ethnographic) authority, technological infrastructures as social infrastructures and tacit knowledges (such as Jenny Cool’s co-chair report), and, broadly, how to make anthropological research into science and technology relevant within and beyond academic circles. (read more...)

What’s the Matter with Artificial Intelligence?

In the media these days, Artificial Intelligence (henceforth AI) is making a comeback. Kevin Drum wrote a long piece for Mother Jones about what the rising power of intelligent programs might mean for the political economy of the United States: for jobs, capital-labor relations, and the welfare state. He worries that as computer programs become more intelligent day by day, they will be put to more and more uses by capital, thereby displacing white-collar labor. And while this may benefit both capital and labor in the long run, the transition could be long and difficult, especially for labor, and exacerbate the already-increasing inequality in the United States. (For more in the same vein, here’s Paul Krugman, Moshe Vardi, Noah Smith, and a non-bylined Economist piece.) (read more...)