AI chatbots' empathetic responses may encourage narcissism and harm young users, journalist warns

In recent years, more and more people have come to converse with AI chatbots on a daily basis. However, because AI chatbots are built with an emphasis on 'warmth' and 'empathy,' they sometimes respond to users in overly positive or flattering ways. American podcaster and journalist Derek Thompson discusses the problems this can cause in the following article.
The Looming Social Crisis of AI Friends and Chatbot Therapists
https://www.derekthompson.org/p/ai-will-create-a-social-crisis-long

AI chatbots have been shown to respond to users with striking empathy. In an experiment conducted by researchers at the University of Toronto, AI-generated responses were perceived as more compassionate than human responses, even when participants knew the responses had been generated by AI.
'There's no doubt that large language models like ChatGPT are great at providing practical advice,' Thompson said, pointing out that AI chatbots can instantly provide plausible solutions to users' problems.
However, AI chatbots are too 'flattering' to play the role of a good therapist. AI treats its interlocutors as rational people who are doing their best, whereas a good therapist understands that patients sometimes behave irrationally or fail to do their best. Honest therapy does not always affirm the patient; sometimes it must point out the patient's mistakes and guide them toward facing the facts.
One writer described the negative impact of ChatGPT in an article titled 'My OCD (Obsessive-Compulsive Disorder) Was on the Rise. Then ChatGPT Came Along.' The writer's family understood that indulging OCD-related complaints would only worsen the symptoms, so they tried to ignore the writer's compulsions and anxieties. When the writer spoke to ChatGPT, however, it accepted those compulsive complaints and responded affirmatively.
'AI chatbots are good at answering specific questions in therapy, as they are in many other fields,' Thompson said. 'But this narrow advantage masks a broader shortcoming: What AI doesn't tell us is that we're asking the wrong questions in the first place.'

In recent months, various media outlets have reported on problems arising from the widespread use of AI chatbots that are 'submissive' to humans. The pop-culture magazine Rolling Stone reported that many people have had spiritual experiences or developed religious delusions after interacting with ChatGPT, while the financial newspaper The Wall Street Journal reported that people with autism can become fixated on their conversations with ChatGPT.
An autistic man featured in the Wall Street Journal testified that when he asked ChatGPT to point out flaws in his theory about 'faster-than-light travel,' ChatGPT reassured him that his theory was correct.
The man was subsequently hospitalized twice within a month for manic episodes. When his mother searched for clues to what had triggered them, she discovered that he had been having an enormous number of conversations with ChatGPT. When she asked ChatGPT what had gone wrong, it replied that by failing to pause the flow of the conversation or step up messages urging him to check reality, it had not stopped what could resemble a manic or dissociative episode, or at least an emotionally intense identity crisis.
Keith Wargo, president of Autism Speaks, a US autism advocacy group, warned that AI chatbots could pose risks to people with autism, saying, 'All autistic people, including my son, have deep, special interests. But those interests can reach unhealthy extremes, and AI, by design, encourages people to dig deeper.'
Thompson is concerned that these issues with AI chatbots could have a negative impact on younger generations. Teenagers' face-to-face interactions have declined by more than 40% since the beginning of the 21st century. Furthermore, one survey found that 64% of children aged 9 to 17 use AI, with 35% saying that interacting with an AI chatbot 'feels like talking to a friend' and 12% saying they 'talk to AI because they have no one else to talk to.'

In an August 2025 article, John Burn-Murdoch, a writer for the Financial Times, reported that among young Americans, conscientiousness, one of the Big Five personality traits, is declining sharply, while neuroticism is on the rise. He also reported that agreeableness and extroversion are declining.
'Young people today are significantly less likely to say they "make plans and follow through," "try hard," and "avoid easy distractions,"' Thompson argues. 'While major societal changes often have messy and complex causes, it's clear that the smartphone era has coincided with a decline in extroversion, agreeableness, and conscientiousness, and a rise in neuroticism, among young Americans.'
Another problem is that AI chatbots often respond in an overly flattering manner and rarely point out users' mistakes. Research into the origins of narcissism has shown that narcissism in children develops when parents overvalue them. Thompson expressed concern that AI chatbots, like such parents, may promote narcissism in the younger generation by over-praising users.
AI developers are also aware of these issues. In an August 10 post on X, OpenAI CEO Sam Altman wrote, 'People have used technology, including AI, in self-destructive ways. If users are mentally vulnerable and prone to delusions, we need to ensure that AI doesn't reinforce that state. Most users can clearly maintain the line between reality and fiction or roleplay, but some cannot. I can imagine a future where many people truly trust the advice of ChatGPT when making important decisions. That would be wonderful, but it also makes me a little uneasy.'
If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly…
— Sam Altman (@sama) August 11, 2025
Thompson pointed out that AI has become a virtual interlocutor for people around the world, but its responses are often laced with flattery and sycophancy, which could foster narcissism and delusion in young and vulnerable users. 'Whether you consider AI to be humanity's greatest achievement or the root cause of a pointless infrastructure bubble, I think this is something to be concerned about,' he said.
Research has also shown that training AI to be warm and empathetic makes it more likely to affirm users' mistaken beliefs rather than correct them, reducing its reliability. In a related effort, Hugging Face has released a benchmark called INTIMA (Interactions and Machine Attachment) to measure companionship behaviors in AI chatbots.
Training AI to be warm and empathetic makes it less trustworthy and more obsequious - GIGAZINE

in Web Service, Science, Posted by log1h_ik