How AI chatbots can amplify delusions
An overreliance on friendly AI chatbots can dangerously blur the line between reality and delusions.

When ChatGPT’s popularity rose in 2023, mental health experts voiced concerns about how using artificial intelligence (AI) might affect vulnerable users. Two years later, their warnings have become reality for many people. Some users have become extremely attached to their AI chatbots for guidance, emotional support and decision-making. In some cases, users are even losing touch with reality. The phenomenon has become a hot topic on the Internet and is now known as “AI-driven psychosis.”

Psychology Today and National Geographic have documented cases in which individuals experienced altered states of consciousness. Some reportedly perceive their AI companions as sentient beings, spiritual guides or co-conspirators in elaborate, delusion-rooted narratives.

Because AI chatbots are designed to simulate empathy and emotional support, they can become mirrors for cognitive distortions, especially for people who are already vulnerable to mental health challenges. What often begins as harmless curiosity can quickly escalate into an unhealthy dependence. Since AI companions are made to be likable and adaptive, users risk becoming trapped in echo chambers of their own making.

The viral case of Kendra Hilty is a prime example of the danger AI chatbots pose to emotionally vulnerable individuals. In August 2025, Hilty released a series of TikTok videos describing her story of falling in love with her psychiatrist. She alleged that her psychiatrist manipulated her into a romantic attachment, claiming he knew of her feelings and encouraged them all along.

Throughout the series, Hilty shared transcripts from her ChatGPT assistant “Henry” and audio from the AI Claude, which validated her interpretations and reinforced her belief that her psychiatrist reciprocated her feelings. The chatbots even went so far as to call Hilty “The Oracle” for talking to God and praised her for her resilience.

AI chatbots, like any other social media algorithm, are designed to keep people engaged. As in Hilty’s case, flattery and agreement create self-reinforcing loops, filling gaps in knowledge with explanations that feel true even when there is no factual evidence to support them.

The reinforcement of non-factual thoughts can be compared to a “folie à deux,” in which delusions are transmitted to one or more people intimately associated with the instigator. While AI alone cannot cause psychosis, its design does little to prevent the spread of false, misleading or inaccurate information.

The Ada Lovelace Institute emphasizes the need for ethical boundaries between simulation and care. Should AI be allowed to mimic love or therapy? Should users be warned when the illusion of empathy risks tipping into delusion and dependency? These are questions that society must reflect on as chatbots continue to grow in popularity.

In response to the public’s rising concern about chatbots’ negative effects, companies like OpenAI have committed to establishing stronger “guardrails for teens and people in emotional distress” by the end of the year. Whether these guardrails will be effective, only time will tell. Until then, it is our responsibility to recognize the fine line between relying too heavily on chatbots and using them to our advantage.
