To Anthropomorphize or Not To Anthropomorphize: Embracing the Human Side of AI
Anthropomorphism, pareidolia and what to do about them
"We don't just use tools. We relate to them."
One of my friends and collaborators, Ida, posed a thoughtful question about anthropomorphism to our panelists at our Human-AI Teaming summit earlier in April. The question stuck with me, and I want to share my perspective on it here.
Let’s dive in.
In the 1960s, a simple computer program named ELIZA, written by Joseph Weizenbaum at MIT, simulated the behavior of a psychotherapist. It didn't do much; it simply reflected users' words back as questions. But that was enough. People opened up. They asked for private time with it. Some felt understood. ELIZA didn't know anything. But it sounded like it did. That was all it took.
We see shapes in clouds, rabbits on the moon, or even dragons in rock formations. This is pareidolia—our brain's tendency to perceive meaningful patterns in ambiguous stimuli. Often, those patterns resemble faces or animals because our perception is tuned for survival: spotting a face, a threat, or a friend.
Designers have long worked with this wiring. A study by Wodehouse et al. examined over 2,300 images of everyday objects and found that people consistently assign emotional traits to products based on subtle face-like cues, such as curved lines, symmetrical placement, and even vent holes.
But pareidolia is just the beginning.
When those patterns start to feel intentional, when we think the car wants a little more encouragement to start or that Alexa genuinely understands our question, we step into a different mode of perception: anthropomorphism. That's when we move from pattern recognition to relationship building. We start projecting thoughts, feelings, and personality where none exist.
If pareidolia sparks recognition, anthropomorphism is the story we build around it. It's our instinct to attribute human qualities—like emotion, motivation, or moral reasoning—to non-human entities.
We talk to our cars when they don't start. We give a pep talk to our plants. We call a chatbot "friendly" or "rude." We say a robot is "trying" even when we know it's just executing code.
This isn't irrational. It's relational. It's how we've survived as a species: by assuming agency and intention, we decide whether the things we encounter are friends or foes.
And now comes AI.
When AI systems respond in warm tones or offer empathy, we don't just evaluate them; we relate to them. A recent study by researchers at DeepMind and Oxford provides empirical evidence of what many designers and ethicists have long suspected: when users interact with large language models in multi-turn conversations, especially in emotionally charged or social contexts, they begin to treat them more like humans. These systems express empathy, offer validation, and even simulate memory, particularly after a few exchanges. And when one anthropomorphic cue appears, others often follow.
Neuroscience research also supports this phenomenon. A 2018 study by Palmer and colleagues showed that when people sense the presence of another agent—even when it's artificial—our brains engage memory systems and social-processing mechanisms built for human-to-human interaction.
We respond accordingly.
We trust more.
We confide more.
We relate.
My perspective on anthropomorphism is to approach it with curiosity. Ultimately, this isn't just a quirk of user behavior. It's a call to design with intent. So, rather than suppressing it, perhaps the real work is developing AI systems that are worthy of the social roles people inevitably assume.
That means:
Choosing ethical frameworks and guardrails while shaping these systems.
Embedding system-level norms and prompts that remind people of the AI's limitations (a rough sketch of this idea follows this list).
Building a voice (character) for the AI grounded in universal human values that support, not exploit, our social wiring.
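To make the second point a bit more concrete, here is a minimal, purely illustrative sketch of what "embedding system-level norms" could look like in a chat application: a system prompt that states the assistant's limitations, plus a small helper that re-injects a brief reminder every few user turns so long conversations don't quietly drift into unexamined anthropomorphism. The function name, constants, and reminder cadence are my assumptions for illustration, not an established standard or anything from the studies cited here.

```python
# Illustrative sketch only: a system prompt stating the AI's limitations, plus a
# periodic reminder injected into long conversations. Names and cadence are
# hypothetical choices, not a recommended or standard configuration.

SYSTEM_NORMS = (
    "You are an AI assistant. You do not have feelings, memories of the user "
    "beyond this conversation, or professional credentials. When the user "
    "shares something emotionally sensitive, acknowledge it, remind them you "
    "are an AI, and suggest human support where appropriate."
)

REMINDER = "Reminder: you are an AI; briefly restate your limitations if relevant."
REMINDER_EVERY_N_TURNS = 6  # assumed cadence; tune for your own product


def build_messages(history: list[dict]) -> list[dict]:
    """Prepend the system norms and, every few user turns, a short reminder."""
    messages = [{"role": "system", "content": SYSTEM_NORMS}]
    user_turns = 0
    for msg in history:
        messages.append(msg)
        if msg["role"] == "user":
            user_turns += 1
            if user_turns % REMINDER_EVERY_N_TURNS == 0:
                messages.append({"role": "system", "content": REMINDER})
    return messages


if __name__ == "__main__":
    # Tiny usage example with a fake two-turn history.
    history = [
        {"role": "user", "content": "I've had a rough week."},
        {"role": "assistant", "content": "I'm sorry to hear that."},
    ]
    for m in build_messages(history):
        print(m["role"], "->", m["content"][:60])
```

The design choice here is deliberate friction: rather than letting warmth and fluency accumulate unchecked, the system periodically surfaces its own limits, which supports rather than exploits the social wiring described above.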
At the end of the day, while creating these human-AI interactions, we're not just building better user interfaces; we're also shaping how people experience intelligence, agency, and relationships in digital form.
That's a wrap for this issue. Until next time, take good care of yourself and your loved ones!
📚 Further Reading
Wodehouse, A., et al. (2018). Characterising Facial Anthropomorphism and Its Implications for Product Design
Palmer, C.J., Clifford, C.W.G., & Burr, D.C. (2018). Face Pareidolia Recruits Mechanisms for Detecting Human Social Attention
Ibrahim, L., Akbulut, C., Elasmar, R., et al. (2024). Multi-turn Evaluation of Anthropomorphic Behaviours in Large Language Models
Reeves, B., & Nass, C. (1996). The Media Equation
Thank you for highlighting the need to understand the nature of how humans relate to AI. Especially because of The Media Equation, I figured it’s just a given that anthropomorphizing will happen. And I think that we will relate to AI in more than one way at the same time. So as designers, it will help immensely to know how to support the different ways of relating to create the best designs for users and society as a whole.