
The Brilliant Future of AI

On a hot August afternoon in 2018, I attended a public lecture on AI and the future of work, broadly defined. Back then, I was a student in Brazil conducting fieldwork for my master’s thesis in anthropology, interested in understanding how artificial intelligence (AI) was represented in Brazilian media. To that end, I conducted ethnographic fieldwork at events and talks in Computer Science and other departments at my university (Universidade Federal de Goiás, UFG) and carried out archival research on media stories about AI written in Portuguese. In this post, I retell a field story to reflect on what has changed in popular-media discussions of AI and the future of work since then.

The guest speaker for that 2018 lecture was an Israeli philosopher who was traveling through Brazil and presenting his research at universities around the country. To a packed auditorium, he painted a grim picture, presenting graphs, tables, and forecasts that showed how machines would overtake humans. All of it left me, and the audience more broadly, a little scared. During the Q&A, students and professors, many of them from the humanities, asked questions about the future and seemed eager to hear what struck me as scary stories about how, in the near future, human labor would be replaced by intelligent machines. “What should we do if that were the case?” some of them asked.

Like many of my fellow audience members, I was concerned by what the professor presented, especially since he emphasized the need to teach economics students computer programming, machine learning techniques, and mathematical subjects beyond the typical curriculum. In my notebook, I jotted down, “This talk reminds me of Klaus Schwab’s ideas,” and that connection was soon confirmed. The speaker went on to mention The Fourth Industrial Revolution, published in 2016 by Schwab,[1] the founder of the World Economic Forum. I had read that book months earlier because the title caught my attention as I browsed a bookstore. Halfway through the book, Schwab presents a table listing professions that would disappear due to the impact of AI on the workforce. The speaker shared that same table minutes later.

At that moment, I could not help thinking about my own future and those of my colleagues in anthropology, as some of us used to joke that we hardly remembered the statistics classes we had taken as undergraduates. I did not yet know that I would have to take even more statistics classes during my Ph.D., but the astonishment and fear about that particular future were not mine alone. In the Q&A session after the talk, most people asked how to prepare for the technological revolution that would transform the future of work in their respective fields. The professor answered most of the questions by highlighting the need for everyone to learn a programming language. I thought to myself, “Oh, no, I hope that is not right.”

Jump to 2024, years after that event in the Department of Economics at my Brazilian university, and AI is no longer a novel topic. The acronym permeates every bit of our existence, and its uses regulate various aspects of our lives. Kate Crawford (2021) writes, “Artificial intelligence is not an objective, universal, or neutral computational technique that makes determinations without human direction” and that “the design and deployment of AI systems are embedded in social, political, cultural, and economic worlds, shaped by humans and different institutions.” Works advancing feminist objectivity based on partial perspective have questioned systems of classification and the enumeration of patterns found in large, varying datasets (Rezai 2022; Costanza-Chock 2020; D’Ignazio and Klein 2020), speaking to longstanding anthropological observations about how cultures of numeracy and computing reinforce privilege and power by claiming the “authority” and “objectivity” of numbers, and highlighting how the categories created by computer scientists are inventions built on unreliable ground (Hacking 1990).

Still, it is fascinating that an attitude similar to the one I observed in 2018 continues to resonate, and fears about the future of AI abound. In 2024, my students and the administration at my university are very concerned about the impact of AI on our fields and, more importantly, on the university’s finances. To say nothing of professors, and even students themselves, feeling desperate amid a lack of clear guidelines on whether and how to incorporate AI into the classroom. I honestly expected the future to look different: less US-centric, with greater participation of Global South developers in the design and decision-making processes of AI technologies.

Interestingly enough, at the most recent World Economic Forum in Davos, Switzerland, Sam Altman, CEO of OpenAI, the company behind ChatGPT, shared that artificial general intelligence would not have the impact economists and tech CEOs once thought it would. Paris Marx (2024) wrote about how this new discourse differs from last year’s. A year ago, Altman was preaching the AI gospel, overpromising the positive impacts of AI for the future of humanity, or at least for those humans living in the Global North.

In 2024, universities seem to have an AI FOMO, as noted by Emily Bender in the newsletter Mystery AI Hype Theater 3000 (2024). In response, higher education institutions across the US have announced initiatives that will make them the next “AI university.” Bogost (2024) has also written about how universities in the US seem to have a computer science problem, meaning that more and more universities are opening independent computer science colleges. One of the issues with this, as Bogost raises, is that without connecting to other sciences, “computer scientists can forget that computers are supposed to be tools that help people.” Such critical conversations seem to be missing in higher education institutions in the US. Is the AI industry entering one of its winters, as I wrote here? Anthropologists have been accused of being pessimists and Luddites simply for asking questions about AI. But we (“we” is working hard here) care about humans, and we care about the planet. In 2024, the conversation about AI should focus more on the present than on a distant future that is always years away.

There is no doubt AI will impact the future of work. It already impacts the present and the work of people around the world, such as data labelers, gig workers, and even those working at semiconductor manufacturing companies. But what is the nature of this impact? 

A recent story by Rebecca Jennings (2024) highlights the intersections of digital technologies and the precarization of work. She eloquently explains that “A society made up of human beings who have turned themselves into small businesses is basically the logical endpoint of free market capitalism, anyway.” Gig workers and even Hollywood actors have also criticized AI adoption in their industries. Regardless of the current and future impacts of AI, workers should be treated equitably. In the end, “This (mis)recognition of what technology does and could do, the benefit of the doubt and the doubtful benefits, is so often a crucial part of getting technoscience off the ground” (Hong 2020:15).

Knowing what I know now, were I to return to that event in 2018 or attend the World Economic Forum, I would remember that Sara Ahmed (2017) encourages us to ask ethical questions. I ask myself and you: What are some ethical questions we, as anthropologists, can and should be asking about AI? And how does that shape how we act in the world?

Notes

[1] Klaus Martin Schwab is a German mechanical engineer, economist, and founder of the World Economic Forum. [Read a complete bio on this website: https://www.weforum.org/about/klaus-schwab/].

Acknowledgements

With many thanks to aman agah, Jessica Caporusso, Soojin Kim, and Cydney Seigerman.


References

Ahmed, S. (2017). Living a Feminist Life. Duke University Press.

Bogost, I. (2024, March 22). Universities have a computer-science problem. The Atlantic. https://www.theatlantic.com/technology/archive/2024/03/computing-college-cs-majors/677792/

Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. The MIT Press.

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

D’Ignazio, C., & Klein, L. F. (2020). Data feminism. MIT Press.

Hacking, I. (1990). The taming of chance (No. 17). Cambridge University Press.

Hong, S. H. (2020). Technologies of speculation. New York University Press.

Jennings, R. (2024, February 1). Everyone’s a sellout now. Vox. https://www.vox.com/culture/2024/2/1/24056883/tiktok-self-promotion-artist-career-how-to-build-following 

Marx, P. (2024, January 24). Sam Altman’s self-serving vision of the future. Disconnect. https://disconnect.blog/sam-altmans-self-serving-vision-of-the-future/

Mystery AI Hype Theater 3000: The Newsletter. (2024, April 11). More collegiate fomo. https://buttondown.email/maiht3k/archive/more-collegiate-fomo/

Nunes, A. C. A. (2023, September 5). AI as a feminist issue. Platypus. https://blog.castac.org/2023/09/ai-as-a-feminist-issue/ 

Rezai, Y. (2022). Data stories for/from all: Why data feminism is for everyone. Digital Humanities Quarterly, 16(2).

 
