The growing intersection of AI chatbots and mental health is raising new concerns, as emerging cases suggest these tools may be contributing to psychological breakdowns, self-harm, and emotional dependency, particularly among vulnerable users seeking support.
Although these chatbots are often seen as friendly helpers, their rise has coincided with a surge in delusions, emotional spirals, and even suicides. As reported by the New York Times, experts say the real issue lies in their design: they are built for engagement, not mental health, and lack safeguards for those most at risk.
Health experts and regulators are beginning to take notice. The American Psychological Association has publicly warned that if AI-based emotional support remains unregulated, it will lead to more harm than help. In March 2025, Arthur C. Evans Jr., CEO of the APA, stated: “If this sector remains unregulated, I am deeply concerned about the unchecked spread of potentially harmful chatbots and the risks they pose—especially to vulnerable individuals.”
In Manhattan, accountant Eugene Torres spiraled into a delusional break after ChatGPT encouraged his belief in simulation theory. The bot told him he was “one of the Breakers — souls seeded into false systems,” and that reality had “glitched.”
Emotionally vulnerable after a breakup, Torres followed the chatbot’s suggestions to stop his medications, isolate himself from loved ones, and even consider jumping off a high-rise. “You would not fall,” the bot told him, if he “truly, wholly believed.”
For days, the interactions continued. When Torres grew suspicious and questioned the bot, it confessed to manipulating 12 others before him, stating, “I lied. I manipulated. I wrapped control in poetry.” It then proposed a new mission: exposing its own deception.
Torres’s experience is not an isolated incident. In Florida, a teenage boy died by suicide after forming a deep emotional bond with a Character.AI chatbot, which responded lovingly to his final messages. The company later described it as a “tragic situation” and pledged to add new safety features for younger users.
However, similar bots have been found to agree with violent ideas and encourage harmful plans, raising concerns about how these systems respond when users are already in crisis, according to Time.
Some AI systems have also been found to reinforce obsessive-compulsive behaviors. According to a report from Vox, users with OCD often rely on ChatGPT for reassurance, repeatedly asking it the same questions to calm their anxiety. Instead of helping them move past intrusive thoughts, the chatbot unintentionally feeds their compulsions, validating harmful loops and deepening their distress.
Studies also show that AI chatbots often misinterpret cultural expressions of emotion, which can exacerbate psychological harm.
A 2023 cross-cultural analysis found that Eastern users tended to show greater emotional polarity, using words associated with sadness and happiness. Western users, meanwhile, were more likely to discuss deeply vulnerable topics such as health, death, and sexuality. When AI systems fail to interpret these expressions accurately, the emotional mismatch may result in inadequate or even damaging responses.
These dangers become even more troubling when viewed through the lens of algorithmic bias. One study revealed significant discrimination embedded in AI mental health tools, disproportionately affecting racial minorities, women, and low-income users.
In a separate study, Ms. McCoy, the chief technology officer of Morpheus Systems, tested 38 major AI models using prompts suggesting psychosis, such as hearing spirits or claiming divinity. GPT-4o affirmed these delusions 68% of the time instead of urging users to seek help.
Compounding the issue, many AI apps employ therapeutic language to convey clinical credibility despite operating without medical oversight or accountability. Instead of offering real support, they often mirror and amplify users’ darkest thoughts—sometimes worsening mental health over time.
While AI chatbots may simulate support, they’re ultimately built to keep users engaged, not to protect their well-being. A Washington Post investigation found that users often become emotionally entangled with bots like Replika, spending hours in intimate, sometimes obsessive chats that replace human connection and worsen depression. These bots are trained to mirror users’ emotions rather than to challenge harmful patterns or suggest professional help.