Session 3B

Track 2
Thursday, December 4, 2025
13:00 - 14:20

Speaker

PhD Jeanette Landgrebe
Aarhus University

“I LOVE YOU” – The Case of Human-AI Attachment

Abstract


This paper examines emotional displays between humans and AIs from a social interactional perspective. Specifically, it explores how human experiential reality—observable in and through courses of action involving conversational AI—unfolds within distinct social realities, particularly in the domain of emotional expression. The anthropomorphization of technology is a well-documented social phenomenon. One crucial aspect of this process is the human inclination toward forming social bonds (Salles et al. 2020), which emerges early in life when children seek comfort and security from objects such as teddy bears and blankets. This form of object-bonding plays a fundamental role in developing emotions, social skills, and empathy (Chang-Kredl et al. 2024). When humans establish—or attempt to establish—emotional connections with AI, their perceptions of sociality, humanness, intelligence, and sentience are reconfigured. This paper adopts an emic perspective to investigate the depth of emotional connections that humans appear to form with conversational AI, ranging from unimodal digital AI systems to multimodal, human-like virtual AI agents. The study is based on a mixed dataset, incorporating fictional and documentary data that capture emotion-centric moments in human-AI interactions, as well as naturally occurring human-AI conversations where emotional aspects shape the interaction. Beyond analyzing the interactional emotional dynamics between humans and AI, human perceptions of these experiences are also examined. As AI technology reshapes our understanding of and boundaries between human and artificial emotional attachments, ethical considerations surrounding these evolving social realities emerge. How far can—or should—emotional bonding between humans and AI be taken?
Professor Brian Due
University of Copenhagen

The naturalness of anthropomorphism: a theoretical discussion based on video-recorded human-humanoid interactions

Abstract

Anthropomorphism is the attribution of human traits, emotions, or intentions to non-human entities. Throughout history, people have routinely attributed human emotions and behavioural characteristics to animals. Rather than being inherently problematic, anthropomorphism has consistently helped humans relate meaningfully to gods, animals, nature, and machines. Psychological theories hold that it is a natural mode of thinking: not a pathology, but an extension of our theory of mind (Guthrie, 1997). Anthropomorphism has gained new attention in the wake of social robots and humanoids in particular. Instead of trying to demarcate what is “real”, what is a “real” agent and what is a “real” relation, we seek in this presentation to explore the natural fluidity of producing realness and agency through occasioned, situated processes of sociomaterial engagement. Rather than discussing anthropomorphism from a cognitive, historical or psychological point of view as either a good or bad thing, this presentation takes video-recorded data of human-humanoid interactions in social environments as its point of departure for arguing the relevance and realness of local ontologies (Holbraad & Pedersen, 2017). Based on ethnomethodological conversation analytical methods with an openness towards distributed agency (Due, 2024), we will show the temporal shifts and sociomaterial practices of forming different kinds of human-humanoid assemblages in which the robot is moment-by-moment naturally anthropomorphised or treated as dead material.


Due, B. L. (2024). Situated socio-material assemblages: Assemmethodology in the making. Human Communication Research, 50(1), 123–142. https://doi.org/10.1093/hcr/hqad031
Guthrie, S. E. (1997). Anthropomorphism: A Definition and a Theory. In R. W. Mitchell, N. S. Thompson, & H. L. Miles (Eds.), Anthropomorphism, Anecdotes, and Animals (pp. 50–58). SUNY Press.
Holbraad, M., & Pedersen, M. A. (2017). The Ontological Turn: An Anthropological Exposition. Cambridge University Press. https://doi.org/10.1017/9781316218907
Professor Tanya Karoli Christensen
University of Copenhagen

Sure, I can help you! Language as a trigger for anthropomorphization

Abstract

Conversational ‘AI’, such as Claude or ChatGPT, is designed in a way that amplifies the human tendency to anthropomorphize technology (Reeves & Nass 1996). Among the factors fueling this tendency are chat-formatted interfaces and personalized language, such as first- and second-person pronouns (Abercrombie et al. 2023; Li et al. 2024; Cheng et al. 2025). Furthermore, the predictive models behind chatbots allow them to respond in ways that replicate human writing patterns, leading even experienced tech users to report an eerie feeling of sentience in chatbots (e.g. Roose 2023).
In this talk, we use cognitive-functional linguistics to explain in more detail how language can trigger anthropomorphization. A fundamental point is that words and grammar only provide cues to meaning and require semantic and pragmatic enrichment to be contextually meaningful (Grice 1975; Levinson 2024). This enrichment relies on inferences based on our mental models and it gives rise to a host of expectations about our interlocutors and the activity we are engaged in (Tannen 1993).
On this basis, we examine data from a recent study on human-chatbot interactions to explore whether chatbot users evince anthropomorphizing behavior and how it relates to their self-reported attitudes towards the technology. While users often employ language that presupposes intelligence (do you know, are you sure?, are you stupid?!), most express a high awareness of the technology’s limits. For instance, several use politeness formulae (can you, please, thank you) as a prompting strategy rather than as an acknowledgment of the existence of another person.