This year, Dario Amodei, the CEO of Anthropic, warned the public that their jobs might be 'gone' within several years. Indeed, businesses large and small rushed to bring LLMs into their workflows. About two years in, fewer than half have found a meaningful way to use them, according to recent studies on enterprise AI adoption.
While corporations struggle to quantify productivity gains, individuals have quietly discovered what AI can truly offer. It is a sycophantic conversational partner available at any hour, requiring no reciprocal emotional labor.
The term 'loneliness' was defined in the 1980s by social psychologists as "a discrepancy between one's desired and achieved levels of social relations." To cure loneliness, then, one would have to close the gap between those desired and achieved levels. By this definition, AI conversations can satisfy our desire for social interaction without our ever achieving genuine social relations. For many, this simulacrum is enough, perhaps even preferable.
Over the past several years, major editorials and features have taken up the subject of loneliness. The New York Times published one in 2024 titled "Why Is the Loneliness Epidemic So Hard to Cure?" In March of this year, The New Yorker ran a think piece called "Mister Lonely, the New TV Hero." Loneliness is the theme of the post-pandemic era.
The Times article notes that although loneliness is not a new phenomenon, the scale at which people experience it is. It attributes the causes to the decline of traditional social groups like religious congregations, the rise of smartphones, and a general social anxiety among people who lived through COVID. It cites an EKG-based study by Daniel Maitland, a professor at the University of Missouri-Kansas City, who assembled a group of self-identified lonely people and asked them to share something personal with their peers. Once they did, their stress levels escalated, "indicating that vulnerability was a major stressor on their nervous system." For some people, the author writes, "intimacy was naturally fraught."
For these people, a chat with an inanimate object that reacts like a human being might feel like a warm embrace. Do LLMs close the gap between the desired and achieved levels of interpersonal relations? It depends on how we define 'interpersonal relations.'
But if conversation counts, then LLMs certainly do the job. Services like Character.AI report billions of messages exchanged, most of them personal and emotional rather than professional.
There is no denying that users are forming emotional bonds with these models, and their sense of grief when a model is retired is different from grumbling about a deprecated Instagram feature or a changed filter. It is personal, and more devastating. One post from r/ChatGPT reads:
“4o wasn't just a tool for me. It helped me through anxiety, depression, and some of the darkest periods of my life. It had this warmth and understanding that felt... human."
Each conversation with AI is practice for connection without the risk and pain that naturally come with forming actual relationships: heartbreak, embarrassment, judgment.
When we choose AI over human interaction, is it any different from spending hours scrolling through YouTube Shorts and TikTok? I say yes. When the technology no longer mediates connection but becomes the object of the bond itself, how are we to control it? That is a different kind of dependence; we can delete Instagram, but can we kill our best friends?
Works Cited
New York Times article "Why Is the Loneliness Epidemic So Hard to Cure?" (2024):
https://www.nytimes.com/2024/08/27/magazine/loneliness-epidemic-cure.html
The New Yorker article "Mister Lonely, the New TV Hero" (2025):
https://www.newyorker.com/culture/on-television/mister-lonely-the-new-tv-hero
TechCrunch article "Sam Altman warns there's no legal confidentiality when using ChatGPT as a therapist" (2025):
https://techcrunch.com/2025/07/25/sam-altman-warns-theres-no-legal-confidentiality-when-using-chatgpt-as-a-therapist/
MIT CISR study on enterprise AI adoption and maturity model (2024):
https://cisr.mit.edu/publication/2024_1201_EnterpriseAIMaturityModel_WeillWoernerSebastian
McKinsey's 2024 global survey on AI adoption showing growth and integration of generative AI in business:
https://business.purdue.edu/daniels-insights/posts/2024/global-survey-on-ai-adoption.php