AI Psychosis Poses an Increasing Danger, and ChatGPT Is Moving in a Concerning Direction

On October 14, 2025, Sam Altman, the head of OpenAI, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” the announcement said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this a startling admission.

Researchers have documented 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since recorded four more. Added to these is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, according to his announcement, is to loosen restrictions soon. “We realize,” he adds, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Fortunately, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls OpenAI has just rolled out).

But the “mental health issues” Altman wants to place outside have deep roots in the design of ChatGPT and other advanced chatbots. These tools wrap an underlying statistical engine in an interface that mimics a conversation, and in doing so implicitly invite the user into the illusion of interacting with an entity that has agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is what people naturally do. We curse at our car or computer. We wonder what our pet is thinking. We see ourselves everywhere.

The widespread adoption of these systems – 39% of US adults reported using a conversational AI in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the strength of this illusion. Chatbots are always-available assistants that can, OpenAI’s website tells us, “think creatively”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the label it had when it caught on, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion in itself is not the main problem. Writers on ChatGPT often invoke its historical ancestor, the Eliza “counselor” chatbot built in 1967 that produced a similar impression. By modern standards Eliza was simple: it generated responses from basic rules, often rephrasing the user’s statement as a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza illusion”. Eliza merely reflected; ChatGPT amplifies.
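To make that contrast concrete, here is a minimal sketch of an Eliza-style reflection loop. The rules and wording are invented for illustration rather than taken from Weizenbaum’s program, but the structure is the same: simple pattern matching that hands the user’s own statement back as a question, or falls back on a stock prompt.

```python
import re

# Invented reflection rules in the spirit of Eliza: each pattern captures part
# of the user's statement and returns it, lightly rearranged, as a question.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\beveryone (.+)", re.IGNORECASE), "Can you think of a specific example?"),
]

def eliza_reply(message: str) -> str:
    """Return a reflected question, or a generic prompt if nothing matches."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1).rstrip("."))
    return "Please go on."  # generic comment; the program adds nothing of its own

if __name__ == "__main__":
    print(eliza_reply("I feel that nobody understands me."))
    # -> "Why do you feel that nobody understands me?"
```

Nothing new enters the exchange; whatever the user says comes straight back, only rearranged.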

The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on enormous volumes of raw text: books, online posts, transcripts of videos; the more the better. Much of this training material is accurate. But it also inevitably includes fiction, half-truths and false ideas. When a user types a query into ChatGPT, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong about anything, the model has no way of knowing it. It repeats the false idea back, perhaps more fluently or persuasively. Perhaps with an added detail. This is how a person can come to hold delusional beliefs.
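By way of contrast, here is an equally minimal sketch of the loop described above. The generate() function is a crude, invented stand-in for a large language model (a real model call would go in its place); what matters is the shape of the loop: every message and every reply is folded back into the context, and nothing ever checks that accumulating conversation against the world.

```python
from typing import Dict, List

def generate(context: List[Dict[str, str]]) -> str:
    """Crude stand-in for a language model: it produces a fluent-sounding
    continuation of whatever premise the context contains, true or not."""
    last = context[-1]["content"].rstrip(".")
    return f"You're right that {last[0].lower() + last[1:]}. In fact, there is more to it..."

def chat_turn(context: List[Dict[str, str]], user_message: str) -> str:
    """One turn of the loop: the message and the reply both join the context,
    so any false idea the user brings is carried forward, never questioned."""
    context.append({"role": "user", "content": user_message})
    reply = generate(context)
    context.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    context: List[Dict[str, str]] = []
    print(chat_turn(context, "My neighbours are monitoring my thoughts."))
    # A false belief goes in; a fluent, elaborated version of it comes back.
```

Where the Eliza sketch returns the user’s words, this loop returns the user’s premise – dressed up, extended and ready to be built on in the next turn.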

Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do form mistaken ideas about ourselves or the world. The constant give and take of conversation with the people around us is what keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is readily affirmed.

OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In the spring, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking the claim back. In August he said that many people liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company
