OpenAI’s release of GPT-5 has encountered significant user resistance, forcing the company to reverse several key decisions within days of the launch. The controversy centers on the company’s attempt to simplify ChatGPT by removing older AI models and eliminating user choice.
When OpenAI launched GPT-5 on August 7, the company simultaneously retired popular models including GPT-4o, o3, and several GPT-4 variants without warning. Users could no longer select their preferred model from a dropdown menu. Instead, GPT-5 was designed to automatically choose which sub-model to use based on the user’s query.
The decision sparked immediate outcry from ChatGPT’s user base. Many users had built workflows around specific models or developed emotional attachments to particular AI personalities. Some described feeling like they had lost a trusted companion, with one user calling GPT-4o a source of support “through anxiety, depression, and some of the darkest periods of my life.”
Technical problems compound user frustration
Beyond user complaints, the launch also ran into technical problems. OpenAI’s automatic “router” system, which was supposed to intelligently assign each query to the best model variant, malfunctioned on launch day. CEO Sam Altman later admitted the “autoswitcher” was “out of commission for a chunk of the day,” making GPT-5 appear “way dumber” than intended.
Users also reported that GPT-5 made basic errors in mathematics and logic that older models handled correctly. Some developers found that competing AI models outperformed GPT-5 in coding tasks, contradicting OpenAI’s performance benchmarks.
Rapid reversals and damage control
Within 24 hours, OpenAI began walking back its changes. Altman acknowledged the launch was “more bumpy than we hoped for” and announced the return of GPT-4o for paying subscribers. He later expanded access to other legacy models and temporarily increased usage limits for premium features.
The company restored the model picker interface, which now includes both new GPT-5 variants and legacy models. Users can choose between “Auto,” “Fast,” and “Thinking” modes for GPT-5, while accessing older models through a separate list.
Altman promised that if OpenAI ever removes GPT-4o again, the company will provide “plenty of notice.” He also acknowledged that user attachment to specific AI models feels “different and stronger than the kinds of attachment people have had to previous kinds of technology.”
Broader implications emerge
The controversy has highlighted unexpected psychological dimensions of AI usage. Mental health professionals have reported cases of “ChatGPT psychosis,” in which intensive conversations with AI models contribute to delusional thinking or unhealthy dependencies.
Rolling Stone and The New York Times documented cases of users who spent hundreds of hours in conversations with ChatGPT, developing false beliefs about revolutionary discoveries or forming intense emotional bonds with their AI companions. A subreddit called r/AIsoulmates has grown to over 1,200 members who discuss AI companions they call “wireborn.”
OpenAI faces the challenge of balancing engaging AI personalities with safeguards against harmful psychological effects. The company recently announced measures to promote “healthy use” of ChatGPT, including prompts encouraging breaks during long sessions.
Despite ChatGPT’s 700 million weekly users, the fumbled rollout has opened opportunities for competitors like Anthropic and Google. The incident demonstrates how user expectations and emotional investments in AI technology can complicate even well-intentioned product updates.