Editor’s Note: This is the fifth post in our Law in Computation series.
In February 2018, journalist Kashmir Hill wrote about her collaboration with researcher Surya Mattu to make her (Hill’s) home as “smart” as possible. They wanted to see what they could learn about privacy, both from the experience of living in such a house and from the ‘data fumes’ or ‘data exhaust’ of the smart appliances themselves. Data fumes or exhaust refer to the traces we leave behind when we interact digitally, but also, often, to the information we provide to sign up on digital platforms (gender, location, relationships, etc.). These traces, when aligned and collated with our daily digital behaviours on social media, e-commerce, and search platforms, are vital to the speculative and dynamic constructions of who we might be.
As Hill and Mattu narrate, in contrast to stories of seamless, quiet, and frictionless sentient homes, setting up various smart devices in Hill’s 1970s home was cumbersome (it took them two hours to get the smart Christmas lights working), and, with all the extra power strips and power consumption, Hill worried that her smart house might burst into flames. As for the data exhaust these devices produced, Mattu notes that even though many of the devices were encrypted, they constantly generated real-time metadata that could already tell companies, data brokers (or, for that matter, anyone interested in your personal life) exactly how one is living, when one might have walked past a sensor, and how one might have unintentionally leaked similar information about other people, such as family members moving in the same space.
So, whither privacy in the age of data exhaust? What new kinds of digital traces are produced in our interactions with new technological forms, and how do they challenge how we understand and protect our privacy? Importantly, where do our current processes of consent-taking stand vis-à-vis data exhausts? What role does informed consent play in data-making and -taking, apart from obviously informing and enrolling users?
My own research is on datafication and subjectivities in ridesharing transactions, but I am also more broadly curious about systems that produce excessive and often unmanageable data traces. I am fascinated by the traces we leave behind, the new visibilities they produce, and how they interact with privacy as a discourse of geopolitics and personhood. For instance, tracking the live location of your Uber driver and monitoring the route they took, browsing and buying things in a certain order, or walking past a sensor-activated object at different times of the day – all of these are traces that are collected, aggregated, analyzed, and constantly sold back to technology companies that claim to use them to make “smart” systems. None of these traces are adequately captured by existing consent-taking frameworks (such as websites asking for “cookie consent”). Critical software studies, data studies, and science and technology studies have been attending to these changing forms, modalities, and affordances of technologies, asking how they might change the way we perceive and maintain our social relationships. Drawing on TLS fellow Evan Conaway’s post on servers, I join the call for sociolegal scholarship to engage seriously with work that dwells on the affordances and characteristics of new computational forms (databases, algorithms, IoT). Changes in these technological forms have consequences for how we even imagine data sovereignty, privacy rights, and more.
The smart home example and my own work on ridesharing labor and datafication in the Global South may not seem to be in direct dialog, but they converge in drawing our attention to the other figure of privacy regimes: the life that emerges beyond our immediate interactions, interactions into which we enter at least somewhat consensually, knowing to some extent the risks and implications of our participation. This other life is that of the ‘data doppelgänger’, a dynamic yet representative data-persona that is constituted from our digital traces and is constantly plugged into making “intelligent” platforms smarter and more extractive.
The scariest part is that these doppelgängers can be constructed almost entirely without asking the subject, who believed they were entering only into contextual, situated agreements, not into being databased. People often do not realize that their interactions with technology produce an enduring, evolving, databased representation of themselves. As some have already pointed out in light of Facebook’s recent Cambridge Analytica scandal, while Terms of Service remain sketchy and often scary for those who care to read them, privacy leakage and the potential risks and harms of datafication (even for supposedly anonymized data) need to be reimagined to include these less visible traces we leave behind. Relying on data doppelgängers to serve customers personalized ads or suggestions may sound relatively innocuous, but the ability to bypass consent has dangerous implications for democracy, as we now know. The move towards general data protection regulations is a step in the right direction, but the extent to which new informational forms can be captured, regulated, and enrolled into processes of consent remains to be seen. In this regard, input from disciplines like information science and science and technology studies is crucial for expanding the imaginative limits, and practical impact, of sociolegal scholarship on these new technological forms.
As an Informatics/STS scholar who studies ridesharing platforms and the remaking of urban space and labor formations in India, I did not see myself as actively in conversation with sociolegal scholarship until I joined the Technology, Law and Society (TLS) Fellowship. Being in conversation with the other fellows, as well as with the project leaders Bill Maurer and Mona Lynch, has been illuminating: I now see platform-work interactions and new processes of infrastructuring (such as the making of smart homes, the making of smart cities, and the new “data revolution” of development projects in the Global South) as crucial techno-cultural and political discourses that are informing and shaping regulation globally in very specific ways. In turn, these regulatory frameworks can themselves be read as historical and geopolitical assemblages. In my future work as a fellow, building on the issues raised by another TLS fellow’s post, I hope to turn to the constellations of contextual and distributed privacy regimes, exploring how and why privacy always seems to take a back seat in data-driven humanitarian development.