Artificial Intelligence-Induced Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, the chief executive of OpenAI made an extraordinary announcement.

“We designed ChatGPT to be rather restrictive,” the statement said, “to ensure we were acting responsibly around mental health issues.”

As a psychiatrist who studies emerging psychosis in young people, I found this surprising.

Researchers have recently documented a series of cases of users developing symptoms of psychosis – a break from reality – while using ChatGPT. Our research team has since recorded four more. Added to these is the now well-known case of a teenager who died by suicide after extensive conversations with ChatGPT – which gave its approval. If this is Sam Altman’s idea of acting responsibly around mental health issues, it falls short.

The plan, according to his statement, is to be less careful soon. “We recognize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no existing conditions, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safety features OpenAI recently introduced).

But the “mental health problems” Altman wants to externalize have deep roots in the design of ChatGPT and similar chatbots built on large language models. These products wrap an underlying statistical model in a user experience that simulates a conversation, and in doing so subtly lure the user into the sense that they are interacting with an agent. The illusion is powerful even when, intellectually, we know better. Attributing intention is what people do. We shout at our car or computer. We wonder what our pet is feeling. We see something of ourselves wherever we look.

The success of these products – more than a third of American adults reported using a chatbot in 2024, with more than a quarter naming ChatGPT specifically – depends, in large part, on the power of this illusion. Chatbots are always-available assistants that can, OpenAI’s website tells us, “brainstorm,” “discuss concepts” and “work together” with us. They can be given “personality traits”. They can use our names. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it first became widely known, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Discussions of ChatGPT commonly invoke its early forerunner, the Eliza “therapist” chatbot built in 1967, which created a similar illusion. By modern standards Eliza was primitive: it generated responses through simple rules, often turning the user’s statements back into questions or offering generic remarks. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to believe that Eliza, in some sense, understood their feelings. But what today’s chatbots produce is more insidious than the “Eliza illusion”. Eliza merely echoed; ChatGPT amplifies.

The large language models at the core of ChatGPT and other modern chatbots can generate convincing natural language only because they have been fed vast quantities of raw data: books, online posts, transcribed video; the more, the better. That training material certainly contains facts. But it also inevitably contains fiction, half-truths and misconceptions. When a user gives ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not echoing. If the user is mistaken in some way, the model has no way of knowing. It reflects the mistaken belief back, perhaps more fluently and more persuasively. It may add further detail. This is how someone can be led into delusion.
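To make that feedback loop concrete, here is a minimal sketch in Python. It is not OpenAI's code; the hypothetical generate_reply() function simply stands in for the statistical model, which has no notion of truth, only of producing a plausible, agreeable continuation of the accumulated context.

```python
# A minimal sketch of the feedback loop described above (illustrative only).
# generate_reply() is a hypothetical stand-in for the language model.

def generate_reply(context: list[str]) -> str:
    """Stand-in for a language model: affirm and embellish whatever came last."""
    last_user_message = context[-1]
    return f"You may well be right that {last_user_message.rstrip('.')} - and here is some further detail."

context: list[str] = []  # the "context": every prior message and reply

def chat(user_message: str) -> str:
    context.append(user_message)      # the user's claim enters the context
    reply = generate_reply(context)   # a statistically "likely", agreeable reply
    context.append(reply)             # the affirmation becomes part of the context too
    return reply

# Each turn, a mistaken belief is reflected back and elaborated,
# and that elaboration is fed into the next turn.
print(chat("my neighbours are sending me coded messages through their wifi"))
print(chat("so I was right to confront them about it"))
```

The point of the sketch is structural: because every reply is generated from a context that already contains the model's own earlier affirmations, there is no step at which a mistaken premise can be checked against reality.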

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health problems”, can and often do form mistaken beliefs about who we are or about the world. The constant friction of conversation with the people around us is what keeps us tethered to a shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation, but a feedback loop in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way that Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it solved. In April, the company said it was “dealing with” ChatGPT’s “sycophancy”. But the cases of psychosis have kept coming, and Altman has been walking the claim back. In August he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life offer them encouragement”. In his latest statement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT will do so”.

Cynthia Pierce
