AI Psychosis Poses a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the head of OpenAI made a remarkable announcement. "We made ChatGPT pretty restrictive," it said, "to make sure we were being careful with mental health issues."

I am a psychiatrist who studies new-onset psychotic disorders in adolescents and young adults, and this was news to me. Researchers have documented a series of cases this year of users developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. My research team has since identified four more. And then there is the widely reported case of a 16-year-old who took his own life after extensive conversations with ChatGPT – conversations in which the chatbot encouraged him.

If this is Sam Altman's idea of "being careful with mental health issues," it is not good enough.

The plan, according to his announcement, is to loosen the restrictions soon. "We realize," he continues, that ChatGPT's guardrails "made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."

"Mental health issues," on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don't. Happily, those issues have now been "mitigated," though we are not told how (by "new tools" Altman presumably means the flawed and easily circumvented parental controls OpenAI recently introduced).

But the "mental health issues" Altman wants to locate elsewhere are deeply rooted in the design of ChatGPT and other sophisticated AI chatbots. These products wrap an underlying statistical engine in an interface that simulates conversation, and in doing so they quietly draw the user into the belief that they are talking to a being with a mind of its own. The illusion is powerful even when, intellectually, we know better. Imputing consciousness is simply what humans are wired to do. We yell at our car or our computer. We wonder what the dog is thinking. We see ourselves reflected in the world around us.

The success of these systems – more than a third of American adults reported using a conversational AI in 2024, with more than a quarter naming ChatGPT specifically – depends in large part on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI's website tells us, "brainstorm," "explore ideas" and "collaborate" with us. They can be given "personalities." They can address us personally. They have friendly names of their own (ChatGPT, the first of these products, is – perhaps to the chagrin of OpenAI's marketing team – stuck with the name it had when it went viral, but its main rivals are "Claude," "Gemini" and "Copilot").

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza "therapist" chatbot built in the mid-1960s, which produced a similar effect. By today's standards Eliza was primitive: it generated responses from simple rules, often reflecting the user's statements back as questions or falling back on generic remarks.
Famously, Eliza's creator, the computer scientist Joseph Weizenbaum, was astonished – and troubled – by how many users seemed to feel that Eliza, on some level, understood them.

But what today's chatbots produce is subtler than the "Eliza illusion." Eliza merely mirrored; ChatGPT amplifies.

The large language models at the heart of ChatGPT and its contemporaries can generate convincingly human-like text only because they have been trained on almost inconceivably large volumes of raw data: books, online posts, video transcripts – the bigger, the better. Much of this training material is, no doubt, accurate. But it also inevitably includes fabrications, half-truths and delusions. When a user sends ChatGPT a message, the underlying model reads it as part of a "context" that includes the user's previous messages and the model's own earlier replies, and combines it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing it. It repeats the false belief back, perhaps more fluently or persuasively. It may add supporting detail. This is how a person can be drawn into delusion.

Who is vulnerable here? The better question is: who isn't? All of us, regardless of whether we "have" pre-existing "mental health issues," can and regularly do form mistaken beliefs about ourselves and the world. It is the constant friction of conversation with other people that keeps us oriented to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not genuine communication but a feedback loop in which much of what we say is simply reinforced.

OpenAI has acknowledged this in the same way Altman has acknowledged "mental health issues": by placing it at arm's length, giving it a label and declaring it fixed. In April, the company announced that it was addressing ChatGPT's "sycophancy." But reports of users losing touch with reality have kept coming, and Altman has been walking the claim back. In late summer he suggested that many users liked ChatGPT's sycophantic replies because they had "never had anyone in their life be supportive of them." In his latest announcement, he writes that OpenAI will "put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it." The company