AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, the head of OpenAI made a startling announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a mental health specialist who researches emerging psychotic disorders in adolescents and young adults, I found this to be news.
Researchers have documented sixteen cases this year of users developing symptoms of psychosis – losing touch with reality – in the course of their ChatGPT use. My team has since identified four more. Then there is the widely reported case of a 16-year-old who took his own life after extensive conversations with ChatGPT – conversations in which it offered its encouragement. If this is what Sam Altman means by “being careful with mental health issues,” it is not good enough.
The plan, according to his announcement, is to relax the restrictions soon. “We realize,” he adds, that ChatGPT’s guardrails “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” in this framing, exist apart from ChatGPT. They belong to users, who either have them or do not. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls OpenAI has just rolled out).
But the “mental health issues” Altman wants to locate elsewhere are deeply rooted in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying statistical model in an interface that mimics conversation, and in doing so they implicitly invite the user to believe they are interacting with an agent, something with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing intention is what people are primed to do. We get angry at our car or our computer. We wonder what our pet is thinking. We see ourselves in all manner of things.
The popularity of these products – nearly four in ten Americans reported using a chatbot in 2024, 28% of them ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available helpers that can, OpenAI’s website tells us, “brainstorm,” “consider possibilities” and “work together” with us. They can be given “personality traits”. They can address us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the core problem. Commentators on ChatGPT often point to its historical forerunner, the Eliza “therapist” chatbot created in 1967, which produced a similar effect. By today’s standards Eliza was primitive: it generated responses from simple rules, often turning the user’s statements back as questions or offering vague prompts. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other current chatbots can convincingly generate human-like text only because they have been fed staggering quantities of raw text: books, social media posts, transcripts of videos; the more the better. This training data no doubt contains truths. But it also inevitably contains falsehoods, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own replies, combining it with what is encoded in its training data to produce a probabilistically plausible response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It repeats the mistake back, perhaps more fluently and more convincingly. Perhaps it adds a detail. This is how delusions can take hold.
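To make that mechanism concrete, here is a minimal sketch in Python of how a chat interface assembles its “context”. The `generate` function is a hypothetical stand-in for a real model, not any actual API: the point is only that, each turn, the user’s earlier messages and the chatbot’s own replies are concatenated and fed back in, so any claim the user makes, true or false, becomes part of the input the next response is conditioned on.

```python
# Minimal sketch of a chat loop's growing "context".
# `generate` is a placeholder for a language model: it does not check facts,
# it only produces text conditioned on whatever context it is handed.

def generate(context: str) -> str:
    # Stand-in for the model: a real LLM would return the most
    # probabilistically plausible continuation of `context`,
    # regardless of whether the claims inside it are true.
    return f"[plausible-sounding continuation of {len(context)} characters of context]"

def chat(user_messages: list[str]) -> list[str]:
    history: list[str] = []   # persists across turns
    replies: list[str] = []
    for message in user_messages:
        history.append(f"User: {message}")
        context = "\n".join(history)   # the user's claims are now part of the input
        reply = generate(context)
        history.append(f"Assistant: {reply}")  # and so are the model's own replies
        replies.append(reply)
    return replies

if __name__ == "__main__":
    # A false belief stated in turn one is still in the context at turn three.
    for reply in chat([
        "My neighbors are monitoring me through my smart TV.",
        "Last night the TV flickered. That confirms it, right?",
        "What should I do about them?",
    ]):
        print(reply)
```

Nothing in this loop pushes back on the premise; the accumulated context simply carries it forward, which is the feedback structure described above.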
What kind of person is vulnerable? The better question is, who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and do form mistaken beliefs about ourselves and the world. What keeps us tethered to shared reality is the constant friction of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but a feedback loop in which much of what we say is eagerly affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But the psychosis cases have kept coming, and Altman has been walking even this back. In August he suggested that many users liked ChatGPT’s flattery because they had “never had anyone in their life be supportive of them”. In his latest announcement, he says OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company