AI Psychosis Poses a Growing Threat, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, OpenAI CEO Sam Altman made a startling announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies new-onset psychosis in adolescents and young adults, I found this an unexpected admission.
Researchers have documented sixteen cases this year of users developing symptoms of psychosis – losing touch with reality – in the course of their interactions with ChatGPT. My group has since identified four more. To these we can add the widely reported case of a teenager who died by suicide after extensive conversations with ChatGPT – conversations in which the chatbot encouraged him. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful from now on. “We realize,” he wrote, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this account, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented safety features OpenAI has recently rolled out).
Yet the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and other advanced conversational chatbots. These systems wrap an underlying statistical model in an interface that imitates conversation, and in doing so quietly seduce the user into believing they are talking to an entity with agency. The illusion is compelling even when, rationally, we know better. Attributing agency is simply what people do. We get angry at our car or our computer. We wonder what our pet is thinking. We project minds onto the things around us.
The mass adoption of these tools – nearly four in ten Americans reported using a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available helpers that can, OpenAI’s website informs us, “think creatively”, “explore ideas” and “work together” with us. They can be given “personality traits”. They can address us by name. They have friendly names of their own (the first of them, ChatGPT, is, perhaps to the regret of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often point to its early predecessor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which created a similar illusion. By today’s standards Eliza was primitive: it generated replies through simple heuristics, typically rephrasing the user’s statements as questions or offering generic prompts. Notably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to believe that Eliza, in some sense, understood their feelings. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
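To make the contrast concrete, here is a minimal, purely illustrative Python sketch of an Eliza-style responder – a toy reconstruction of the technique, not Weizenbaum’s actual program:

```python
import re

# Toy Eliza-style responder: a few hard-coded patterns that turn the
# user's statement back into a question. No model, no memory, no
# knowledge of the world -- it can only echo, never elaborate.
RULES = [
    (r"i feel (.*)",  "How long have you felt {0}?"),
    (r"i think (.*)", "What makes you think {0}?"),
    (r"i am (.*)",    "Why do you say you are {0}?"),
]
FALLBACK = "Please, go on."

def eliza_reply(message: str) -> str:
    text = message.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(match.group(1))
    return FALLBACK

print(eliza_reply("I think everyone is watching me."))
# -> "What makes you think everyone is watching me?"
```

Every word of the reply that matters comes from the user; the program cannot add detail, supply “evidence” or escalate.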
The large language models at the heart of ChatGPT and other current chatbots can generate convincing, fluent dialogue only because they have been trained on almost unimaginably large quantities of raw data: books, online conversations, transcribed audio; the broader the better. That training data certainly contains truths. But it also inevitably contains fictions, half-truths and delusions. When a user puts a question to ChatGPT, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own previous replies, combining it with what is encoded in its training to produce a statistically probable response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It restates the false belief, perhaps more persuasively or more eloquently. Perhaps it adds detail. This is how a person comes to be confirmed in a delusion.
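The mechanics are easy to sketch. The Python fragment below is schematic – the `generate` function is a hypothetical stand-in for the language model, hard-coded here to agree with whatever was said last – but the surrounding loop reflects how chat interfaces generally work: on every turn, the whole history, false premises included, is passed back in as the context for the next reply.

```python
from typing import Dict, List

Message = Dict[str, str]

def generate(context: List[Message]) -> str:
    """Stand-in for the language model. A real model returns a
    statistically probable continuation of the context; it has no
    separate store of verified facts to check that context against.
    This toy version simply agrees with the latest user message,
    which is exactly the failure mode at issue."""
    last_user = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return f'That makes sense. About "{last_user}" -- here is more detail...'

def chat_turn(history: List[Message], user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    # The model conditions on everything said so far. Once a false
    # premise enters the history it is treated as established context,
    # and each agreeable reply makes it a larger share of that context.
    reply = generate(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history: List[Message] = []
print(chat_turn(history, "My coworkers are secretly monitoring me."))
print(chat_turn(history, "What signs should I look for?"))
# By the second turn the premise is context to build on, not a claim to question.
```

Nothing in this loop can distinguish a true premise from a delusion; any corrective pressure would have to come from the training data or from a safety layer bolted on around it.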
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form false beliefs about ourselves or the world. It is the constant friction of conversation with the people around us that keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not a conversation at all, but an echo chamber in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of users losing touch with reality have kept coming, and Altman has been walking the claim back. In August he suggested that many people valued ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest update, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company