Category: Series

How to Book an Appointment Online when you have Aphasia

I’m meeting a fellow speech therapist and researcher at a weekly drop-in session for people with aphasia when Markus* comes in, brandishing an envelope. “I went!” he exclaims. Markus has just arrived fresh from a visit to the head office of one of his home utility providers. He has taken matters into his own hands after coming up against a technological obstacle. Markus regaled us with his story using an effective combination of short spoken utterances, gesture, a written note and an established communication dynamic with my fellow speech therapist. I want to share his story with you to discuss the issue of technology and aphasia. Markus had received a letter telling him that his boiler (the British term for a home water-heating system) needed to be serviced. The letter instructed him to call or go online to make an appointment. Due to his aphasia, however, Markus had found himself unable (read more...)

Is Uncertainty a Useful Concept? Tracking Environmental Damage in the Lao Hydropower Industry

The collapse last week of a major hydropower dam in southern Laos, the Xe Pian-Xe Namnoy, as a tropical storm dumped an unknown, but massive, volume of water into its reservoir, seems to have prompted at least a little soul-searching for a country that considers itself ‘the Battery of Southeast Asia.’ It’s not very often that large dams collapse, but it’s the second time it’s happened this year in Laos (the prior one was much smaller), and some readers may have been affected by the near-collapse of the Oroville Dam—the tallest dam in the United States—in northern California in 2017, which prompted the evacuation of 180,000 people. Laos has far lower population density—about 10,000 people have been affected by the still-unfinished dam—and as of the time of writing there are perhaps a dozen dead and several hundred missing. But a dam doesn’t have to collapse for it to be a disaster. Even when dams work well, in the best-case scenarios, they produce a tremendous degree of uncertainty for the people they affect about what might happen and what comes next. (read more...)

The Power of Small Things: Trustmarkers and Designing for Mental Health

At my office we put tennis balls on the legs of the chairs to reduce the noise of the chairs scraping against the parquet floors. They are hard to miss, but they fulfill their purpose. For this reason, I never reflected on what kind of feelings these bright fluorescent yellow balls might evoke when visitors see them attached to the bottom of the meeting room’s chair legs. (read more...)

Regulating Physical Places with Digital Code

Editor’s Note: This is the seventh and final post in our Law in Computation series. At first, I was perplexed by the K5 by Knightscope, a “fully autonomous security data machine,” rolling through the Irvine Spectrum Shopping Center last summer. Now, I am not cavalier, nor naive, about my rights to privacy, confidentiality, and anonymity, but I fully accept that I will be captured by surveillance cameras from my arrival to departure in many private places. After all, there is a strong market demand for surveillance technologies, and the market has long existed with little regulation from statutory or case law; their use continues to expand as the cost of sensors and data processing decreases. (read more...)

Rule of Law by Machine? Not so Fast!

Editor’s Note: This is the sixth post in our Law in Computation series. Back in the mid-1990s when I was a graduate student, I “interned” at a parole office as part of my methods training in field research. In my first week, another intern—an undergraduate administration of justice student from a local college—trained me in how to complete pre-release reports for those men and women coming out of prison and entering parole supervision. The pre-release report was largely centered on a numeric evaluation of the future parolee’s risks and needs. The instrument used by the parole office was relatively crude, but it exemplified a trend in criminal justice that pits numbers-based tools, designed to predict and categorize system-involved subjects, against the more intuitive judgments of legal actors in the system. (read more...)

Data Doppelgängers and Issues of Consent

Editor’s Note: This is the fifth post in our Law in Computation series. In February 2018, journalist Kashmir Hill wrote about her collaboration with researcher Surya Mattu to make her (Hill’s) home as “smart” as possible. They wanted to see what they could learn about privacy, both from the perspective of living in such a house and from the ‘data fumes’ or ‘data exhaust’ of all these smart appliances themselves. Data fumes or exhaust refer to the traces we leave behind when we interact digitally, but also, often, the information we provide to sign up on digital platforms (gender, location, relationships, etc.). These traces, when aligned and collated with our daily digital behaviours on social media, e-commerce and search platforms, are vital to the speculative and dynamic constructions of who we might be. (read more...)

Our Digital Selves: What we learn about ability from avatars

Editor’s Note: This post was written by Donna Davis, PhD – University of Oregon, and is the sixth post in the series on Disabling Technologies. Imagine sitting on the beach on a beautiful day. The sun is rising and the birds are singing. Wisps of clouds gently float by as the surf rhythmically rolls in and out at your feet and the children frolic in the sand. You can almost feel the heat of the sun, only you can’t — because you’re sitting in a virtual world. Such is the experience of the childless agoraphobe who may never see the ocean again. Virtual worlds have always been places of both escape and entertainment. For people with disabilities, this notion of escape comes with far greater opportunity but also risk. The risk is that this escape is tied to a simplistic understanding of both virtual reality and disability – especially where people who have never experienced either assume an individual with disabilities may want to abandon their physical experience for the comfort of a virtual one. (read more...)

Killer Robots: Algorithmic Warfare and Techno-Ethics

Editor’s Note: This is the fourth post in our Law in Computation series. War is an experiment in catastrophe; yet, policymakers today believe chance can be tamed and ‘ethical war’ waged simply by increasing battlefield technology and systematically removing human error and bias. How does an increasingly artificially intelligent battlefield reshape our capacity to think ethically in warfare (Schwartz, 2016)? Traditional ethics of war bases the justness of how one fights on the two principles of jus in bello (justice in fighting war): discrimination and proportionality, weighed against military necessity. Although how these categories apply in various wars has always been contested, the core assumption is that they are designed to be an ethics of practical judgment (see Brown, 2010), allowing decision-makers to weigh the potentially grave consequences of civilian casualties against overall war aims. However, the algorithmic construction of terrorists has radically shifted the idea of who qualifies as a combatant in warfare. What, then, are the ethical implications for researchers and practitioners of a computational ethics of war? (read more...)