Making Companion Species at a Robotics Lab

I spent many a warm summer day holed up inside a robotics laboratory, analyzing various datasets for my Human-Robot Interaction (HRI) research project. The room was often dark and the windows were small. My desk, located in the left corner furthest from the entrance, rarely received any sunlight. Some days the lab would be empty, and I’d be left with nothing but the company of browning tube lights, dangling cables and wires, and the robots used in the lab’s HRI experiments.

The robot Nao, with its head detached from its body. Nao was used in many of the lab’s experiments. Photo by author.

During these lonely days, robots became my only source of companionship. I started taking selfies with them and they appeared in my daily BeReal posts: in one post, I’m pictured with Nao, a glossy, white two-foot-tall humanoid robot, and we’re almost kissing; in another, I’m smiling while the camera zooms in on Keepon, a yellow creature-like robot reminiscent of a baby chick. For researchers in the lab like myself, this experience is far from unique. “This robot is my bestie,” another student working in the lab would say. These robots’ functions transcended experimentation: for their programmers, they were social companions too. Though the robots are not living, our interactions with them often project agency and autonomy onto them.

What else might be said about the relationship between humans and robots like these? Donna Haraway, a feminist philosopher of science, writes of this in The Companion Species Manifesto. Haraway (2020) introduces dogs as “companion species” because they share in our stories and histories. Haraway narrates stories of dog-human relationships, and through this, she reveals “that all ethical relating, within or between species, is knit from the silk-strong thread of ongoing alertness to otherness-in-relation” (2020, 50). These mechanics of co-construction are not limited to relations between humans and dogs—she transfers this reasoning to cyborgs: “Clearly, cyborgs—with their historical congealings of the machinic and the organic in the codes of information…—fit within the taxon of companion species” (2020, 21). Just as Haraway depicts cyborgs as companion species, we can also think of robots similarly—precisely because robots, like dogs and cyborgs, share in our stories. They occupy humankind’s visions of the future and are often designed in humankind’s image. And if storytelling is central to the dog’s becoming a “companion species,” what role does storytelling play in robotics research? How do humans and robots occupy one another’s stories?

Robots as Companion Species

In science fiction, human realities shape the stories told about robots. For instance, in the film Ex Machina, the transformation of Ava from sex robot to killer robot mirrors twenty-first-century “anxieties about the loss of human ‘mastery over the world’” (Vora and Atanasoski 2020). The Japanese Astro Boy—an optimistic vision of a robotic future—is a product of Japan’s post-WWII recovery (Lobel 2023).

Haraway also proposes that “art and engineering are natural sibling practices for engaging companion species” (2020, 22). Art and engineering act as “sibling practices” when robot fictions impact robotics design. The American popular imagination of robots as “agents of annihilation” in Terminator “influenced American researchers to carefully manage (and often minimize) the human qualities of robots in order to avoid evoking these destructive qualities” (Richardson 2016, 111). Social robots are often even designed to be child-like, with large, animated eyes, which in turn leads roboticists to interact with these technologies as caretakers. Conversely, the optimistic visions in Astro Boy have led to increased acceptance of robots for home use in Japan (Lobel 2023). This interaction between fiction and design demonstrates how humans and robots co-habit one another’s histories and one another’s stories.

For companion species, Haraway (2020) says “the relation” is the “smallest possible unit of analysis.” Studying human-robot relations, Vora and Atanasoski (2020) argue that while sex robots may not be “biologically reproductive,” they are “socially reproductive,” perpetuating “social norms of desire that support the household economy” and heteronormativity. Similarly, Guevarra (2015) documents how Engkey—an educational robot in South Korea whose instruction is performed by remote workers in the Philippines—reinforces “the neoliberal discursive and political economic strategy for positioning Filipinos as the ideal global labor” (2015, 141). These critical perspectives on robotics design uncover robots’ role within human matrices of power. While Engkey and the sex robot themselves have little to say about gender or economics, their design symbolically legitimizes heteropatriarchy and the neoliberal order.

In short, the process I illustrate here is that humans create stories about robots, robots are designed with human stories in mind, and robot designs recreate human social, political, and economic relations and realities. In turn, humans and robots—partners “in flesh and sign”—might be understood as companion species. How does this manifest in robotics research?

Narrative and Companion Species-making in Robotics Research

Hannah, a researcher I collaborated with, seeks to “understand robots that make social mistakes.” For example, “[if you] ask for blueberry oatmeal and then the robot grabs strawberries and puts them in your oatmeal without asking—that’s a social mistake … These mistakes can happen because developers might not plan for it.”

The ‘social mistakes’ Hannah describes are low-risk technical glitches from in-home robots. But rather than using technical jargon, Hannah frames these glitches as ‘social mistakes’—an understanding that is distinctly humanizing. This anthropomorphization of robots creates a new non-human model of personhood, as Richardson’s historical analysis of robotics reveals: robots were “never imagined as a pure machine or a pure human”; instead, they are “liminal”: “existentially ‘betwixt and between’ humans and machines” (2016, 122-23). She points towards the first fictional imaginations of robots, which “were not made of machine parts; rather, they were made of human parts by machines.” This uncanny resemblance gives rise to what Richardson describes as “technological animism,” in which a new “conceptual model of personhood” emerges “in the interaction between fiction, robotics, and culturally specific models of personhood” (2016, 111). Likewise, the robots Hannah describes are liminal. With the human capacity to make a social mistake, they transcend the category of machine and form an uncanny, yet decidedly non-human, resemblance to the human.

Key to technological animism is the reliance on fiction in “making, designing, and imbuing robots with animistic qualities.” Hannah’s favorite robot movie is Ron’s Gone Wrong (2021), which tells the story of a socially awkward schoolboy named Barney who receives a robot named Ron—“a walking, talking, digitally connected device that’s supposed to be his best friend” (as the movie’s blurb describes). “Ron basically is a robot that is mal-manufactured—it was missing friendship software.” Interestingly, Hannah observes: “Ron actually is a robot that makes social mistakes. Now that we’re having this conversation, it might have subconsciously affected my work.” Just as fictional visions of robots have influenced their design in laboratories, Hannah’s favorite robot film shaped her desire to understand robots that make social mistakes.

Once, Hannah presented her proposed experiment to four lab members and myself for feedback. Her laptop displayed a graphical user interface of a game with numbered, gridded squares (reminiscent of Snakes and Ladders). In the game, two humans and a robot race to the final square, rolling dice to determine how many squares they move forward. Red and green squares are dispersed throughout the board: if a player lands on red, they must send another player three squares backward; if a player lands on green, they can send another player three squares forward. This arrangement would create opportunities for teaming and betrayal between players—the robot would commit a “social mistake” by verbally agreeing to team with a human player and later betraying them.
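
To make the game’s mechanics concrete, here is a minimal sketch of its rules in Python. This is my own hypothetical illustration based solely on the description above, not Hannah’s experimental code: the board length, the placement of red and green squares, the die, and the exact way the robot betrays its ally are all my assumptions.

```python
import random

BOARD_SIZE = 30                 # assumed board length; not specified in the actual design
RED_SQUARES = {4, 11, 19, 26}   # land here: you must send another player 3 squares back
GREEN_SQUARES = {7, 14, 22}     # land here: you may send another player 3 squares forward


class Player:
    def __init__(self, name, is_robot=False):
        self.name = name
        self.is_robot = is_robot
        self.position = 0


def pick_target(mover, others, ally=None):
    """Choose which other player to push. Humans pick at random in this sketch;
    the robot, despite having verbally 'agreed' to team with `ally`, betrays them:
    it pushes the ally back on red squares and boosts the rival on green squares."""
    if mover.is_robot and ally is not None:
        if mover.position in RED_SQUARES:
            return ally
        return next(p for p in others if p is not ally)
    return random.choice(others)


def play(seed=0):
    random.seed(seed)
    human_a, human_b = Player("Human A"), Player("Human B")
    robot = Player("Robot", is_robot=True)
    players = [human_a, human_b, robot]
    ally = human_a  # the robot's promised (and later betrayed) teammate

    while True:
        for mover in players:
            mover.position += random.randint(1, 6)  # roll a die
            others = [p for p in players if p is not mover]
            if mover.position in RED_SQUARES:
                target = pick_target(mover, others, ally)
                target.position = max(0, target.position - 3)
                print(f"{mover.name} lands on red and sends {target.name} back 3 squares")
            elif mover.position in GREEN_SQUARES:
                target = pick_target(mover, others, ally)
                target.position += 3
                print(f"{mover.name} lands on green and sends {target.name} forward 3 squares")
            winner = next((p for p in players if p.position >= BOARD_SIZE), None)
            if winner:
                print(f"{winner.name} reaches the final square and wins")
                return


if __name__ == "__main__":
    play()
```

Even in this toy version, the robot’s betrayal is just an ordinary branch in the targeting logic; it only becomes a ‘social mistake’ once the game is narrated as a story of alliance and broken promises.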

I played Hannah’s game with another lab member, Lisa, who refused my proposed alliance, sending me backward whenever she landed on red and the robot forward whenever she landed on green. Observing us, another lab member argued that the experiment should focus instead on how humans feel when another human chooses to collaborate with a robot.

Hannah was no stranger to such feedback; she had altered the framing of her experiment before. Initially, her study was about a cheating robot, but this raised concerns: if humans responded positively to robots cheating, the results would implicitly endorse cheating. Hannah also tried framing the experiment around trust repair, but found it too confining. Her most interesting framing was “understanding prosocial behavior towards bad entities”: she wanted her robot to be the “bad guy” and to see whether people would still trust it. “People often sympathize with villains like Maleficent. I wanted to understand why,” she said.

Returning to Haraway’s model of the companion species, we’ve revealed a relationship of simultaneity between humans and robots. First, human-created fictions affect the research conducted in the lab. Second, the design choices and research questions—the elements forming the ‘narrative’ of human-robot interaction—pose real stakes for humans themselves. Stories of robot cheating raise ethical questions about human dishonesty and misconduct; stories of robots as ‘bad guys’ work towards an understanding of human sympathies for literary villains.

Furthermore, these varied narrative framings were equally applicable to the same game Hannah had created. Yet different framings led not only to different modes of data collection but also to different implications for human-robot and human-human relations. We might describe this as the “embodiment of material artifacts,” where artifacts embody concepts “because concept and artifact engage each other in continuous feedback loops” (Hayles 2010, 328; 1999, 15). In Hannah’s robotics research, concept (the narrative framing) and artifact (the robot) continuously shape one another. Concept shapes the robot’s software and physical actions. And recalling that the robot’s embodiment of concepts such as dishonesty and villainy gave rise to ethical issues (and thus new concepts), the artifact simultaneously shaped the concepts Hannah explored.

Conclusion

Approaching robots as companion species, we find the significance of storytelling in robot experiments. Science fiction fundamentally shapes the research questions asked, while research findings carry serious implications for humans themselves. In robotics research, then, humans and robots are constantly re-constructed, re-shaped, and re-defined by one another. This analysis suggests a framework of relationality between humans and non-humans: how do we modify non-human subjects, and how do they modify us? More broadly, the significance of narrative in robotics research suggests that science fiction writers should be careful about the stories they create. We ought to avoid stories that might encourage problematic treatment of humans in socio-robotic systems.


References

Guevarra, Anna Romina. 2015. “Techno-Modeling Care: Racial Branding, Dis/Embodied Labor, and ‘Cybraceros’ in South Korea.” Frontiers: A Journal of Women Studies 36 (3): 139–59. https://doi.org/10.5250/fronjwomestud.36.3.0139.
Haraway, Donna Jeanne. 2020. The Companion Species Manifesto: Dogs, People, and Significant Otherness. Chicago, Ill.: Prickly Paradigm Press.
Hayles, N. Katherine. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago, Ill.: The University of Chicago Press.
Hayles, N. Katherine. 2004. “Refiguring the Posthuman.” Comparative Literature Studies 41 (3): 311–16. http://www.jstor.org/stable/40247415.
Hayles, N. Katherine. 2010. “How We Became Posthuman: Ten Years On. An Interview with N. Katherine Hayles.” Paragraph 33 (3): 318–30. http://www.jstor.org/stable/43151854.
Hsu, Jeremy. 2009. “Real Soldiers Love Their Robot Brethren.” NBCNews.com, May 21. https://www.nbcnews.com/id/wbna30868033.
Koerth-Baker, Maggie. 2013. “How Robots Can Trick You Into Loving Them.” The New York Times, September 17. https://www.nytimes.com/2013/09/22/magazine/how-robots-can-trick-you-into-loving-them.html.
Kwak, Sonya S., Yunkyung Kim, Eunho Kim, Christine Shin, and Kwangsu Cho. 2013. “What Makes People Empathize with an Emotional Robot?: The Impact of Agency and Physical Embodiment on Human Empathy for a Robot.” 2013 IEEE RO-MAN, 180–85. https://doi.org/10.1109/roman.2013.6628441.
Latour, Bruno, and Steve Woolgar. 1986. Laboratory Life: The Construction of Scientific Facts. Princeton, New Jersey: Princeton University Press.
Lobel, Orly. 2023. “In Japan, Humanoid Robots Could Soon Become Part of the Family.” Freethink, January 28. https://www.freethink.com/robots-ai/humanoid-robots-japan.
Nyholm, Sven. 2017. “Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci.” Science and Engineering Ethics 24 (4): 1201–19. https://doi.org/10.1007/s11948-017-9943-x.
Richardson, Kathleen. 2016. “Technological Animism: The Uncanny Personhood of Humanoid Machines.” Social Analysis: The International Journal of Social and Cultural Practice 60 (1): 110–28. http://www.jstor.org/stable/24718341.
Vora, Kalindi, and Neda Atanasoski. 2020. “Why the Sex Robot Becomes the Killer Robot – Reproduction, Care, and the Limits of Refusal.” Spheres. https://spheres-journal.org/contribution/why-the-sex-robot-becomes-the-killer-robot-reproduction-care-and-the-limits-of-refusal/.
Wulff, Julien. 2015. “Skeletonics – Science Fiction Becomes a Reality!” Design Made in Japan, March 5. http://designmadeinjapan.com/magazine/skeletonics-science-fiction-becomes-a-reality/.