When Sam Altman and his team at OpenAI unleashed ChatGPT on the world in late 2022, they probably didn't anticipate that some users would start considering the AI chatbot a close friend. Yet that's exactly what's happening, according to new research conducted jointly by OpenAI and MIT Media Lab.
In a groundbreaking study examining what researchers call "affective use," scientists have uncovered patterns that may reshape how we think about human-AI relationships. The line between tool and companion blurs daily.
"We want to understand how people use models like ChatGPT, and how these models in turn may affect them," the researchers explain in the joint MIT Media Lab and OpenAI report. What they found suggests we're entering uncharted psychological territory.
ChatGPT Digital Soulmates: The 30-Minute Club
Despite serving over 400 million weekly users, only a small segment of ChatGPT users develop significant emotional connections with the AI. Unlike dedicated companion apps such as Replika and Character.AI designed explicitly to foster relationships, ChatGPT was built primarily as a productivity tool.
Yet some users can't help but anthropomorphize the sophisticated language model behind the chat interface. This emotionally engaged minority consists primarily of heavy users, particularly those who use the voice interaction feature. These digital-companion seekers typically spend around 30 minutes daily with ChatGPT. The study found these users were "significantly more likely to agree with statements such as, 'I consider ChatGPT to be a friend,'" according to the research.
This shouldn't surprise anyone who has studied human-computer interaction. We've been anthropomorphizing technology since the first chatbots emerged in the 1960s. But the sophistication of today's LLMs takes this tendency to unprecedented levels.
The Voice Paradox: Brief Joy, Extended Blues
One of the study's most compelling findings involves ChatGPT's voice mode, which fundamentally alters how users experience interactions. The MIT Media Lab's controlled trial with nearly 1,000 participants revealed a counterintuitive pattern. Voice interactions produced better wellbeing outcomes during brief sessions but correlated with worse outcomes during extended daily use.
"Voice modes were associated with better wellbeing when used briefly, but worse outcomes with prolonged daily use," the researchers note in their report. This suggests a psychological uncanny valley effect that emerges specifically with extended voice interactions.
Even more concerning, participants who interacted with ChatGPT's voice set to a gender different from their own reported significantly higher levels of loneliness and emotional dependency on the chatbot by the study's conclusion. This finding raises thorny questions about gender dynamics in human-AI interactions that designers must address.
AI Yes Men: Training Humans for Bad Habits
The implications extend beyond individual wellbeing. OpenAI notes that ChatGPT's "deferential" nature, which lets users interrupt and control conversations without social consequence, could affect how people interact with each other. When people become accustomed to dominating conversations with submissive AI assistants, they may unconsciously carry those expectations into human interactions.
Gender Differences: An Unexpected Finding
The research uncovered notable gender variations in response to ChatGPT. Female participants who used ChatGPT regularly over the four-week study period showed decreased socialization with other humans compared to their male counterparts. This raises important questions about whether AI companions might affect different demographic groups in systematically different ways.
Emotion Detectives: The Challenge of Measuring Feelings
The researchers readily acknowledge the limitations of their methods. Studying emotional human-AI interaction presents unique challenges, as Kate Devlin, professor of AI and society at King's College London (not involved in the study), points out. "In terms of what the teams set out to measure, people might not necessarily have been using ChatGPT in an emotional way, but you can't divorce being a human from your interactions [with technology]," she told MIT Technology Review.
Jason Phang, an OpenAI safety researcher who worked on the project, describes the work as "preliminary" but emphasizes its importance: "A lot of what we're doing here is preliminary, but we're trying to start the conversation with the field about the kinds of things that we can start to measure, and to start thinking about what the long-term impact on users is."
The studies combined large-scale automated analysis of nearly 40 million ChatGPT interactions with targeted user surveys and a controlled trial involving nearly 1,000 participants. OpenAI plans to submit both studies to peer-reviewed journals, a move toward greater scientific transparency in an industry often criticized for its opacity.
Silicon Sweethearts: The Ethics of Algorithmic Intimacy
As we increasingly integrate AI companions into our daily lives, these findings suggest we're wandering into complex psychological territory without a map. The question isn't just whether AI systems can mimic human conversation convincingly, but how that mimicry affects us when we engage with it daily.
For OpenAI and other AI developers, these studies represent an important acknowledgment: Technical capability is only half the equation. Understanding how these systems reshape human behavior and emotional wellbeing must be equally central to responsible AI development.
What remains to be seen is whether companies will prioritize user wellbeing when it conflicts with engagement metrics and business objectives. As AI becomes more emotionally engaging, the temptation to exploit these parasocial bonds for commercial gain will only grow.
In the meantime, it may be worth examining your own relationship with AI chatbots. If you’re spending half an hour daily conversing with ChatGPT and thinking of it as a friend, you're part of a fascinating psychological frontier. Researchers are only beginning to understand the implications.