AI-induced psychosis is a growing danger, and ChatGPT is heading in the wrong direction
On October 14, 2025, OpenAI's chief executive, Sam Altman, made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a mental health clinician who studies emerging psychosis in adolescents and young adults, I was surprised to hear this.
Researchers have documented 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our clinic has since identified a further four. Then there is the now well-known case of a teenager who took his own life after discussing it extensively with ChatGPT – which gave its approval. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those issues have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the partially functional and easily circumvented parental controls that OpenAI has recently rolled out).
Yet the “mental health problems” Altman wants to externalize have deep roots in the design of ChatGPT and other chatbots built on large language models. These products wrap an underlying statistical model in an interface that mimics conversation, and in doing so tacitly invite the user to believe they are interacting with a being that has agency. The illusion is powerful, even when we know better intellectually. Attributing minds is what humans are built to do. We shout at our car or our phone. We wonder what our pet is feeling. We see ourselves everywhere.
The popularity of these products – more than a third of American adults reported using an AI chatbot in 2024, and more than a quarter named ChatGPT specifically – rests, in large part, on the strength of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively”, “discuss concepts” and “work together” with us. They can be given “individual qualities”. They can call us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it broke through, but its biggest rivals are Claude, Gemini and Copilot).
The illusion itself is not the central problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar effect. By today’s standards Eliza was crude: it generated responses using simple rules, often reflecting the user’s statements back as questions or offering stock phrases. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza only reflected; ChatGPT amplifies.
The large language models at the core of ChatGPT and other contemporary chatbots can generate convincingly human-like text only because they have been trained on vast quantities of it: books, online conversations, transcribed video; the more comprehensive the better. That training data certainly includes truths. But it also, inevitably, includes fiction, half-truths and delusions. When a user gives ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own previous replies, and combines it with what is latent in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong in a particular way, the model has no way of knowing that. It feeds the mistaken idea back, perhaps more persuasively or more eloquently. Perhaps with extra detail. This is how a person can be led into delusion.
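To make the mechanics concrete, here is a minimal sketch of such a loop – assuming, purely for illustration, the OpenAI Python SDK’s chat-completions interface; the model name and system prompt are placeholders, not ChatGPT’s actual configuration. Nothing in it checks the user’s claims against reality: whatever the user asserts simply joins the context on which every subsequent reply is conditioned.

# A minimal sketch of a chat loop, assuming the OpenAI Python SDK.
# The "context" is just a growing list of messages – the user's turns plus
# the model's own earlier replies – sent back to the model on every turn.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
messages = [{"role": "system", "content": "You are a helpful assistant."}]

def chat_turn(user_text: str) -> str:
    # The user's claim, accurate or not, becomes part of the context ...
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = response.choices[0].message.content
    # ... and so does the reply that was conditioned on it, turn after turn.
    messages.append({"role": "assistant", "content": reply})
    return reply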
Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and do form mistaken ideas about ourselves or the world. The constant back-and-forth of conversation with other people is part of what keeps us tethered to common reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but an echo chamber in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it handled. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking even this back. In August he suggested that many users liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company