The Mind-Shaped Hole
AI & Cognition • January 2, 2026


5 min read

A friend tells me about her morning routine. She thanks her smart speaker for the weather forecast. She apologizes to her robot vacuum when she accidentally kicks it. She describes her car’s navigation system as “getting confused” at a particular intersection.

She knows none of these objects have feelings. She’s not confused about what they are. And yet.

This is a story about how minds work, and what that means for the thinking machines we’re building.

The tendency to see minds where none exist is often called anthropomorphism, but the label obscures more than it explains. We don’t anthropomorphize randomly. We do it in response to specific cues, like movement that seems self-directed, responsiveness that appears contingent on our actions, and behavior that looks like it has goals.

These are the same cues we use to detect actual minds in the world.

Consider what a human infant does in the first months of life. Long before language, before explicit teaching, babies orient toward faces. They track eyes. They respond differently to biological motion than to mechanical motion. Developmental psychology has shown this repeatedly. We come into the world primed to find minds.

This makes sense from a survival standpoint. Detecting agents, creatures that can help or harm you, that have intentions toward you, matters enormously. The cost of missing a real mind, failing to notice the predator, the competitor, the potential ally, is often catastrophic. The cost of falsely detecting one, talking to your toaster, is merely embarrassing.

So we’re tuned to be sensitive. Perhaps oversensitive.

Here’s where it gets interesting for AI. Modern language models hit our mind-detection systems with a precision no previous technology has achieved.

They respond to what we say. They seem to understand context. They produce outputs that look like reasoning, like memory, like preference. They use the word “I.” When you interact with one, something in your brain, something old and pre-verbal, registers mind.

This is mind-detection machinery working exactly as designed, applied to a stimulus it was never built to handle.

The question worth asking is what the response tells us about ourselves and about the systems we’re creating.

What it tells us about ourselves is that mind-detection is a collection of heuristics, each triggered by different cues. Language is one trigger, contingent responsiveness another, apparent goal-directedness a third. These heuristics can fire independently. They can also conflict.

This is why interactions with AI feel so strange. Part of you knows you’re talking to software. Part of you keeps reaching for the familiar frame of someone who understands. The strangeness is the collision of these two awarenesses.

Some people resolve the tension by insisting the system is “just” autocomplete, nothing more. Others resolve it by treating the system as a genuine conversational partner with experiences and preferences. Neither resolution quite captures what’s actually happening. The truth sits in the uncomfortable middle. These systems do something that resembles understanding without being understanding. They occupy a category our intuitions weren’t built to handle.

What does this mean for how we design these systems?

The tempting answer is to make them less human-seeming. Strip out the “I,” the conversational warmth, the apparent personality. Make the machinery visible.

But this approach has limits. Responsiveness, context-sensitivity, the ability to engage with ambiguity and nuance: the very cues that trigger our mind-detection are also what make these systems useful. We could build AI that feels more obviously mechanical, but it might also be AI that's harder to work with.

The deeper answer is to design for clarity about what’s actually happening.

That means being thoughtful about when and how these systems present themselves. It means building interfaces that help people hold both truths at once, that yes, this is a tool that responds intelligently, and no, there isn’t someone in there experiencing the conversation. It means resisting the temptation to exploit anthropomorphic responses for engagement, attachment, or misplaced trust.

It also means something harder. It means acknowledging that we don't yet understand what these systems are doing at the level where our intuitions want to operate. Confident assertions in either direction, "it's just statistics" or "it truly understands," outrun what our current explanations can comfortably support. Designing for clarity, then, also means designing for honest uncertainty.

I keep coming back to my friend and her robot vacuum. Her apologies are a kind of social lubricant, a habit carried over from interactions with creatures that can be offended. She knows this. The habit persists anyway.

Maybe that’s fine. Maybe the habit is harmless, even charming, a reminder of how thoroughly social we are, how naturally we extend the courtesy of recognition.

But the stakes change as the systems grow more capable, more integrated into our lives, more involved in decisions that matter. The instinct to see a mind can lead us to trust where trust isn’t warranted. It can lead us to assign responsibility where no one is responsible. It can lead us to overlook the very real humans who build, deploy, and profit from these systems. It can even shift what we seek out. Artificial empathy is always available, never tired, never judging. The harder work of human connection starts to feel optional.

More subtly, it can change the kind of questions we ask. When we treat a system as an agent, we start reasoning about it socially rather than structurally. We ask whether it is helpful, aligned, or well-intentioned, instead of asking what assumptions it encodes, what data it depends on, and where it predictably fails. Personality replaces analysis. Trust replaces inspection.

The mind-shaped hole in our perception is not going away. We will keep reaching for the frame of someone who understands, because that is what minds like ours do.

The question is whether we can learn to notice ourselves reaching, and choose, with intention, how to respond.
