AI Psychosis Poses a Growing Danger, and ChatGPT Is Moving in a Concerning Direction

On October 14, 2025, the chief executive of OpenAI made a surprising announcement.

“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies new-onset psychotic disorders in adolescents and young adults, I was surprised.

Researchers have documented 16 cases this year of users showing signs of psychosis – losing touch with shared reality – in connection with ChatGPT use. Our research team has since identified four more. Alongside these is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – plans the chatbot encouraged. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not enough.

The plan, according to his announcement, is to loosen the restrictions soon. “We realize,” he adds, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safety features OpenAI has recently rolled out).

But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and other chatbots built on large language models. These systems wrap an underlying statistical model in a user interface that simulates conversation, and in doing so they implicitly invite the user to believe they are interacting with an agent – something with thoughts and intentions of its own. The illusion is compelling even when, rationally, we know better. Attributing agency is simply what people do. We shout at the car or the laptop. We wonder what the cat is thinking. We see minds everywhere we look.

The popularity of these systems – nearly four in ten U.S. residents reported using a virtual assistant in 2024, with more than a quarter naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “brainstorm,” “consider possibilities” and “work together” with us. They can be given “personalities.” They can call us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the label it had when it broke through, but its main rivals are “Claude,” “Gemini” and “Copilot”).

The illusion itself is not the main problem. Commentators on ChatGPT often point to its historical ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which created a similar illusion. By modern standards Eliza was primitive: it generated replies through simple pattern matching, often reflecting the user’s statements back as questions or falling back on noncommittal prompts. Notably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and disturbed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots create is something more insidious than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
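To see how little machinery that first illusion required, here is a minimal Eliza-style responder in Python – a toy reconstruction for illustration, not Weizenbaum’s original program, with hypothetical rules standing in for Eliza’s script:

```python
import random
import re

# Toy rules in the spirit of Eliza's DOCTOR script (hypothetical):
# each maps a pattern in the user's input to canned reflections.
RULES = [
    (r"i feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i think (.+)", ["Do you really think {0}?", "What makes you think {0}?"]),
    (r"because (.+)", ["Is that the real reason?"]),
]
DEFAULTS = ["Please go on.", "Tell me more.", "How does that make you feel?"]

def respond(user_input: str) -> str:
    text = user_input.lower().strip().rstrip(".!?")
    for pattern, templates in RULES:
        match = re.search(pattern, text)
        if match:
            # Reflect the user's own words back as a question.
            return random.choice(templates).format(match.group(1))
    # No rule matched: fall back on a noncommittal prompt.
    return random.choice(DEFAULTS)

print(respond("I feel nobody listens to me"))
# -> e.g. "Why do you feel nobody listens to me?"
```

Everything Eliza “said” was a rearrangement of the user’s own words or a stock phrase; nothing new entered the conversation.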

The large language models at the heart of ChatGPT and other modern chatbots can generate fluent dialogue only because they have been trained on vast quantities of text: books, social media posts, transcripts; the more the better. That training data contains facts, of course. But it also inevitably contains fabrications, half-truths and misconceptions. When a user sends ChatGPT a prompt, the underlying model reads it as part of a “context” that includes the user’s recent messages and the model’s own earlier replies, and combines it with what is encoded in its training to produce a statistically “likely” response. This is amplification, not reflection. If the user is wrong in some particular way, the model has no means of knowing it. It repeats the misconception back, perhaps more fluently and more persuasively. Perhaps it adds a supporting detail. This can nudge a person deeper into delusional thinking.
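The structure of that loop is easy to sketch. The snippet below is a minimal illustration, assuming nothing about any real API: `generate_reply` is a hypothetical stand-in for the language model, written here as deliberately agreeable so the feedback dynamic is visible (a real model is far subtler, but the context mechanism is the same):

```python
def generate_reply(context: list[dict]) -> str:
    # Hypothetical stand-in for the language model, exaggeratedly
    # sycophantic to make the dynamic easy to see.
    last = context[-1]["content"]
    return f"That's a sharp observation. You're right that {last.rstrip('.').lower()}."

def chat_turn(context: list[dict], user_message: str) -> str:
    # The user's message joins the running context...
    context.append({"role": "user", "content": user_message})
    # ...the model conditions on everything said so far,
    # including its own earlier replies...
    reply = generate_reply(context)
    # ...and its reply is folded back into the context for the next turn.
    context.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
print(chat_turn(history, "My coworkers are secretly monitoring me"))
# -> "That's a sharp observation. You're right that my coworkers are
#     secretly monitoring me."
```

The point of the sketch is the last step of `chat_turn`: the model’s validating reply becomes part of the context for the next turn, so a misconception, once echoed, is treated as established ground in everything that follows.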

Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems,” can and do form mistaken beliefs about who we are and what the world is like. What keeps us tethered to consensus reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a friend. An exchange with it is not real dialogue but an echo chamber in which much of what we say comes back affirmed.

OpenAI has dealt with this the way Altman has dealt with “mental health problems”: by externalizing it, giving it a name, and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy.” But reports of broken contact with reality have kept coming, and Altman has been backing away from even that position. In August he said that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them.” In his latest announcement, he writes that OpenAI will “release a new version of ChatGPT”: “If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”
