Archives
The panoptic eye of temperature.

Deep Thunder: The Rise of Big Meteorology

Today has been predicted 26 billion times. The same could be said for tomorrow and the foreseeable days to follow. This prodigious divination is the work of just one entity—IBM’s The Weather Company. IBM’s 26 billion daily forecasts likely represent only a small fraction of the models and projections to which any particular day is subjected (the financial and military sectors are equally ambitious prognosticators). Backed by IBM’s computational juggernaut, The Weather Company is burning through terabytes at a brow-furrowing velocity in its effort to fit the world into a forecast. (Listen Now...)

Top images: Edison Intel board with prototype screen attached; photo of Nightscout Foundation sticker; Bottom image: November 2017 Nightscout Hackathon

The FDA, Patient Empowerment, and the Type 1 Diabetes Communities in the Era of Digital Health

The day-long September 2018 workshop, “Medical Devices-Patient Engagement in Real World Evidence: Lessons Learned and Best Practices,” sponsored by the Food and Drug Administration (FDA) and the University of Maryland, Baltimore (UMB), met on the Baltimore campus, in the city where I spent my graduate school years. In contrast to Baltimore’s palpable desperation, UMB’s health campus gleamed with newness, its brick walkways and tastefully planted vegetation viewable through floor-to-ceiling windows. In the well-appointed auditorium, Dr. Jeffrey Shuren, director of the Center for Devices and Radiological Health (CDRH, pronounced ‘cedar’), closed his introduction to the conference with the pronouncement that, as the FDA moved toward real world evidence (RWE), “patient engagement” and the data patients may collect would be invaluable. (read more...)

Image of trees and water at dawn, illustrating the "Dawn of Digital Therapeutics"

The Dawn of Digital Therapeutics

A techno-optimistic attitude tells us we’re living at an inflection point where care practices are being transformed by technology. Monitoring and attending to health and well-being are no longer activities bound within physical spaces like hospitals and clinics; these activities have extended to the basic functions of smartphones. A new labor force has emerged for this digitized health transformation, utilizing open source engineering platforms, structuring work into two-week Agile design sprints, and leveraging professionals from traditional healthcare settings. In many ways, the practices of these workers appear indistinguishable from those of other start-up companies across industry spaces. Throughout ethnographic fieldwork over the last year, I have explored the evolution of this phenomenon within an emergent area of the digital health sphere: digital therapeutics. (read more...)

Picture of Queer Culture Festival staff, police, protesters, and press documenting police as they lift and carry anti-LGBT protesters from the ground.

The Surveillance Cyborg

Editor’s Note: This post is part of our ongoing series, “Queering Surveillance.” Surveillance is an embodied experience, both being watched and watching. The sheer number of concert-goers recording Cher’s “Here We Go Again” concert this past year with their phones had them trade singing and dancing for acts of documentation. Whether the recordings are to remember the experience later, to share the experience with others, or simply to document one’s presence in that space and at that time, recording the concert on one’s phone becomes an experience in its own right. Concert-goers are present in the space, but their attention is split between what is happening in the here and now and the recording that will filter the experience in the future. Their phones and recordings are central to their embodied experience, fused into one like a cyborg traveling across space and time in the moment. Add to this that countless concert-goers are recording the same concert from their individuated perspectives, and the concert becomes infinite and virtual—of course, the way Cher always meant it to be. (read more...)

A table with a light-colored tablecloth shows two examples of biomimicry. In one example, we can see a tablet screen nestled among natural detritus. In the other, we can see a green sapling growing out of a white structure.

The Nature of the Copy

From the dead center of an all-white eye, a lone sapling rose two feet tall. Cyclical ridges and valleys, etched in bioplastic by an unseen watchmaker, encircled the solitary lifeform and separated it from the mottled, decaying plant matter that had been strewn about nearby with intention, detritus by design. Lying adjacent on the table-in-sylvan-drag, a digital tablet and paper pamphlets displayed the word Nucleário. Nucleário and the five other prototypes exhibited at the 2018 Biomimicry Launchpad Showcase in Berkeley, California, were, according to the event’s online marketing, projects from a “new species of entrepreneur” who practices “biomimicry,” the “conscious emulation of life’s genius,” a refrain I would hear repeatedly during my fieldwork on contemporary chimeras of biology and design. “Genius,” a cultural category once reserved for the presence of spiritual inspiration, here refers to the technical creativity of a re-animated nature that designers attempt to imitate in new devices like Nucleário. Under the solar-paneled roof of the David Brower Center, whose eponym served as the first executive director of the Sierra Club, teams from Brazil, Mexico, Colombia, Taiwan, and the United States had gathered to compete under the Biomimicry Global Design Challenge, which, this year, prompted designers to devise solutions for the mitigation and reversal of climate change. The prize: a cash award of $100,000 given by the Biomimicry Institute, a Montana-based nonprofit organization dedicated to “building a new generation of sustainability innovators” through educational initiatives. (read more...)

A black Amazon Echo is adorned with a bright green speech bubble saying "Let me get that for you."

Dumbwaiters and Smartphones: The Responsibility of Intelligence

“I don’t have to drink alone,” she paused for comedic effect, “now that I have Alexa.” Such was the punchline of a story told by a widowed octogenarian at a recent wedding. Alexa is a mass-produced personality that can play music, suggest items for purchase, monitor consumption and health habits, or, like any good friend, just listen. While all these tasks could be performed in silence with various algorithmic appliances, Alexa and her cousins from Google and Apple are imbued with a perceived autonomy directly stemming from their capacity for vocalization. Speech, it seems, beckons the liberation of abiotic materials from their machinic programming. (read more...)

An oil painting of a Dutch colonial ship being tossed around by rough seas under a stormy sky.

Producing the Anthropocene, Producing the Future/Water Futures

Editor’s note: Today we have the final installment of our “Anthropocene Melbourne Campus” series, featuring two related posts by Lauren Rickards and Ruth Morgan.

Producing the Anthropocene, Producing the Future

Lauren Rickards, RMIT University

Images of the future are increasingly cast on the widescreen of the Anthropocene: the planetary-scale shift from the comfy Holocene to an unknown and threatening new ‘operating space’ for the Earth. How humanity inadvertently shifted the whole planet so radically and in such a self-damaging manner is now the subject of intense debate. Different narratives of blame locate relative responsibility with various sectors, activities, and groups. Common candidates include farming, colonial plantations, industrialization and urbanization, and the post-war acceleration in consumption and pollution. From a material perspective, there is a strong geological rationale for naming each as a major source of planetary-scale environmental and social impacts and “terraforming.” Indeed, this is how these various proposed starting dates for the Anthropocene have been identified: through the pursuit of changes in the geological record widespread and sharp enough to count as what geologists call a “Golden Spike”, the prerequisite for declaring a new epoch. Yet this search for the physical origins of the Anthropocene in the historical record needs to extend far past physical signals and their proximate causes to the visions, goals, and assumptions underlying the activities involved, including what Ian Hacking would call styles of reasoning. Reading the Anthropocene in this light reveals many limitations within the outlooks, ideas, and values that informed the activities mentioned above, including an often willful ignorance of the immediate impacts on people, nonhumans, and the abiotic environment, as well as the “unknown unknown” of the long-term, accumulative changes being wrought. (read more...)

Andy Serkis in a motion capture suit alongside the CG character Gollum.

Out-of-Body Workspaces: Andy Serkis and Motion Capture Technologies

Over the last two decades, the entertainment industry has experienced a turn to what Lucy Suchman termed virtualization technologies in film and videogame production (Suchman 2016). In addition, production studies scholars have described authorship as linked to control and ownership, sharpening distinctions between “creative” and “technical” work, a divide with significant economic repercussions (Caldwell 2008). These ideas are useful in understanding film studio workspaces, where visual effects (VFX) workers and actors collaborate in creating believable virtual characters, using three-dimensional (3D) modeling software and motion-capture (mo-cap) systems to capture the attributes and movements of human bodies and transfer them to digital models. Once captured, digital performances become data, to be manipulated and merged seamlessly with those of live actors and environments in the final film. The introduction of virtualization technologies and computer graphics tools has surfaced tensions over creative control, authorship, and labor. British actor Andy Serkis has been a high-profile apologist for the human actor’s central role in bringing virtual characters to life for film. Serkis, whom Rolling Stone called “the king of post-human acting,” is known for using mo-cap to breathe life into digitally created, non-human characters. His notable performances include the creature Gollum in the Lord of the Rings trilogy (2001-2003), the ape Caesar in Rise of the Planet of the Apes (2011), Supreme Leader Snoke in Star Wars: The Force Awakens (2015), and several characters in Mowgli: Legend of the Jungle (2018), which he also directed. While Serkis’ performances have made him highly visible to audiences, digital labor historians have begun documenting the often-invisible film workers creating 3D models and VFX (Curtin and Sanson 2017). The tensions between mo-cap performers and VFX workers reveal the contours of an emerging hybrid workspace that combines actors’ physical bodies and movements with VFX workers’ manipulations of digital geometry and data. (read more...)