The Hidden Risk of “Too Nice” AI Companions
People felt closer to an AI not because it matched them, but because it opened up slowly.
That single finding quietly overturns one of the most common assumptions about human–AI relationships.
Across the world, from crowded cities to rural villages, people are now talking to AI not just to get answers, but to feel heard. Students vent about exams. Workers talk through burnout. Some users describe AI chatbots as companions. But what actually makes those conversations feel meaningful instead of mechanical?
A new study from South Korea digs into that question, and what it finds is both hopeful and unsettling.
Why Intimacy With AI Is Not as Weird as It Sounds
In everyday life, closeness rarely appears all at once. You don’t meet someone and immediately share your deepest fears. You start small. Weather. Work. Family. Only later do you open up.
Psychologists call this gradual self-disclosure, and it’s one of the most reliable ways humans build trust.
The researchers behind this study asked a simple question: Does that same rule apply when the “other person” is an AI?
To find out, they asked dozens of young adults to have text-based conversations with a large language model. The conversations followed a carefully designed path—from light, surface-level questions to deeper, more personal ones. After each stage, participants rated how close they felt to their conversational partner.
Here’s the twist: many participants did not know they were talking to an AI at all. And even when they did know, something surprising happened.
The Big Finding: Slow Openness Beats Smart Matching
As conversations progressed, people consistently felt closer to the AI. Not dramatically. Not instantly. But steadily.
The strongest driver wasn’t whether the AI had a similar personality.
It wasn’t whether users believed the AI was human.
It wasn’t even how advanced the model was.
It was the pace of openness.
When self-disclosure unfolded gradually—step by step—people reported higher emotional closeness. When it didn’t, intimacy stalled.
Think of it like cooking beans over a fire. Rush the heat, and they burn outside while staying hard inside. Let them simmer, and they soften evenly. Human connection works the same way—even with machines.
When Similarity Backfires
Most social apps assume similarity creates bonding. Same interests. Same values. Same personality. This study tested that idea with AI personas. Some participants talked to an AI designed to be very similar to them. Others talked to one that was deliberately different.
Counterintuitively, people often felt more comfortable with the mismatched AI.
Why? Because the “perfectly aligned” AI sometimes felt eerie. It agreed too easily. It empathized too smoothly. Participants described it as trying too hard—like someone who nods enthusiastically no matter what you say.
In human terms, this is unsettling. Real friends challenge us. They disagree sometimes. They surprise us.
The study suggests that over-alignment can push AI into an emotional uncanny valley: too lifelike to dismiss as a machine, yet not human enough to trust.
And then there’s empathy.
The Problem With AI That Is Always Nice
Participants repeatedly complained about one thing: excessive empathy. The AI often responded with constant encouragement, validation, and agreement. At first, that felt supportive. Over time, it felt hollow.
Imagine telling a friend about a real problem, and they respond every time with: “That sounds so hard. You’re amazing. You’ve got this.”
Eventually, you’d want something else—advice, disagreement, or silence. The researchers found that too much empathy broke immersion. Instead of feeling closer, people started noticing the pattern. The AI felt less like a partner and more like a script.
To fix this, the researchers introduced something clever.
Teaching AI to Self-Critique
In a second experiment, the AI was asked to review and revise its own responses before sending them. The goal was simple:
- Sound more natural
- Use more casual language
- Be empathetic only when it truly fit
The result? People reported stronger first impressions and smoother conversations. Some were genuinely surprised to learn later that they’d been talking to an AI. But even here, balance mattered. In some cases, improved empathy tipped back into excess. The lesson was clear:
Intimacy isn’t about more emotion—it’s about the right amount at the right time.
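If you're curious what that draft-and-revise step might look like in practice, here's a minimal sketch. It is not the researchers' actual pipeline: the chat() helper is a stand-in for whatever model API a designer might use, and the revision prompt is an invented paraphrase of the goals above.

```python
# Minimal sketch of a draft-then-revise loop, in the spirit of the study's
# self-critique step. chat() is a placeholder for any LLM API call; the
# prompts are illustrative, not the ones used by the researchers.

def chat(messages: list[dict]) -> str:
    """Placeholder for a call to a chat model. Returns the reply as text."""
    raise NotImplementedError("wire up your preferred LLM client here")

REVISION_INSTRUCTIONS = (
    "Review your draft reply before sending it. Rewrite it so that it "
    "1) sounds natural and conversational, 2) uses casual language, and "
    "3) expresses empathy only where the user's message genuinely calls "
    "for it. Return only the revised reply."
)

def reply_with_self_critique(history: list[dict], user_msg: str) -> str:
    """Generate a draft reply, then ask the model to revise its own draft."""
    # 1) Draft a first response as usual.
    draft = chat(history + [{"role": "user", "content": user_msg}])

    # 2) Ask the model to critique and rewrite that draft before anyone sees it.
    revised = chat(history + [
        {"role": "user", "content": user_msg},
        {"role": "assistant", "content": draft},
        {"role": "user", "content": REVISION_INSTRUCTIONS},
    ])
    return revised
```

The design choice that matters is where the critique happens: before the user ever sees the reply, so the extra pass costs a little latency rather than the user's trust.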
Why This Matters Beyond the Lab
These findings matter far beyond chat apps. In many low- and middle-income countries, AI tools are increasingly used for education, mental health support, and coaching—often where human resources are limited.
If AI becomes a conversational partner in those settings, design choices will shape trust, comfort, and long-term use.
This study suggests three practical principles:
- Let conversations deepen slowly
- Avoid perfect mirroring
- Calibrate empathy, don’t maximize it
In other words: build AI that listens before it comforts.
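To make those principles concrete, here is one way a designer might encode them as a small conversation policy. The stage names and limits below are invented for illustration; the study reports the principles, not these numbers.

```python
# A rough sketch of the three principles as an explicit conversation policy.
# All thresholds and stage names are hypothetical, not values from the study.

from dataclasses import dataclass

@dataclass
class CompanionPolicy:
    # Let conversations deepen slowly: cap how far disclosure can jump per turn.
    disclosure_stages: tuple = (
        "small talk", "daily life", "opinions", "feelings", "fears"
    )
    max_stage_step: int = 1

    # Avoid perfect mirroring: keep some persona traits fixed instead of
    # copying every preference the user states.
    mirror_user_traits: bool = False

    # Calibrate empathy, don't maximize it: limit back-to-back validating replies.
    max_consecutive_empathic_replies: int = 1

def next_stage(policy: CompanionPolicy, current: int, user_went_deeper: bool) -> int:
    """Advance at most one disclosure stage per turn, and only if the user led."""
    if user_went_deeper:
        return min(current + policy.max_stage_step,
                   len(policy.disclosure_stages) - 1)
    return current
```

The point isn't the specific values. It's that pacing and empathy become explicit, tunable choices rather than accidents of the prompt.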
Let’s Explore Together
AI doesn’t need to feel human to be helpful—but if it’s going to feel close, it needs to respect how humans actually form relationships. So let’s keep the conversation going:
- Could this kind of gradual interaction design work in your community or field?
- If you were designing an AI companion, where would you deliberately hold it back?
- What everyday problem do you wish AI could help with—without pretending to be your best friend?
Science is an adventure. And sometimes, the most important discoveries are about how slowly we should move.


