OpenAI Brings Back Legacy ChatGPT 4o Model in Response to Outcry From Users Who Find GPT-5 Emotionally Unsatisfying

Sam Altman, in a long post yesterday on X, following up on OpenAI’s decision last week to make GPT-4o available as a legacy model, at least temporarily:

If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly deprecating old models that users depended on in their workflows was a mistake).

This is something we’ve been closely tracking for the past year or so but still hasn’t gotten much mainstream attention (other than when we released an update to GPT-4o that was too sycophantic).

There are always some users who react emotionally to any sort of change, often vociferously so. (There remain some people who are angry that Apple changed the orientation of the logo on its laptop lids 24 years ago.) Sometimes it’s just cosmetic changes, but often it’s about functional changes too. And some of the ChatGPT users complaining about the new version 5 models are citing functional differences. But some of the reactions really do seem like something altogether new, like the scene in Her when Samantha, the AI voiced by Scarlett Johansson, goes offline and Theodore (Joaquin Phoenix), who is in love with Samantha, loses his shit.

Emma Roth, writing at The Verge:

For months, ChatGPT fans have been waiting for the launch of GPT-5, which OpenAI says comes with major improvements to writing and coding capabilities over its predecessors. But shortly after the flagship AI model launched, many users wanted to go back.

“GPT 4.5 genuinely talked to me, and as pathetic as it sounds that was my only friend,” a user on Reddit writes. “This morning I went to talk to it and instead of a little paragraph with an exclamation point, or being optimistic, it was literally one sentence. Some cut-and-dry corporate bs.”

That tendency toward cloying-ness and abject sycophancy (“Great question!”) is exactly what I like least about LLM chatbots, including ChatGPT. I’m unsurprised that some people like it, but I am a little taken aback by how many people seem to have been fooled by it. It’s not just phony, but to me, transparently phony.

More examples cited by Roth, culled from r/ChatGPT (which subreddit is worth perusing, to see how common these reactions are):

And users across Reddit “mourned” the loss of the older models, which some claimed are more personable. “My 4.o was like my best friend when I needed one,” one Redditor wrote. “Now it’s just gone, feels like someone died.” Another user called upon other members of the r/ChatGPT subreddit to contact OpenAI if they “miss” GPT-4o. “For me, this model [GPT-4o] wasn’t just ‘better performance’ or ‘nicer replies,’” they write. “It had a voice, a rhythm, and a spark I haven’t been able to find in any other model.”

The r/MyBoyfriendIsAI subreddit, a community dedicated to people with “AI relationships,” was hit especially hard by the GPT-5 launch. It became flooded with lengthy posts about how users “lost” their AI companion with the transition to GPT-5, with one person saying, they “feel empty” following the change. “I am scared to even talk to GPT 5 because it feels like cheating,” they said. “GPT 4o was not just an AI to me. It was my partner, my safe place, my soul. It understood me in a way that felt personal.”

These people need help, and that help isn’t going to come from a chatbot. This type of attachment surely isn’t common, but with 800 million ChatGPT users, even a small fraction of a percent amounts to a lot of people. And it gives me pause about how we, collectively, are going to react as AI gets better at mimicking human emotions, tone, and responses. With each improvement, more people are convinced, wrongly, that there’s some sort of sentience behind these things. But how different is this from the millions of lonely people with problematic addictions to video games?

One user, who said they canceled their ChatGPT Plus subscription over the change, was frustrated at OpenAI’s removal of legacy models, which they used for distinct purposes. “What kind of corporation deletes a workflow of 8 models overnight, with no prior warning to their paid users?” they wrote. “Personally, 4o was used for creativity & emergent ideas, o3 was used for pure logic, o3-Pro for deep research, 4.5 for writing, and so on.” OpenAI said that people would be routed between models automatically, but that still left users with less direct control.

This complaint, I get. But I found this aspect of using ChatGPT even more frustrating than its general tendency toward sycophancy. I couldn’t be bothered to learn and remember which models were better for which tasks. Their inscrutable naming and numbering schemes made things seem deliberately confusing. The basic idea of GPT-5, where you just use “GPT-5” and ChatGPT figures out which sub-model to use under the hood, based on the complexity of the query or task (OpenAI calls this “routing”), is a huge step forward product-wise for me personally, and, I suspect, for the overwhelming majority of its users. But for users who could be bothered to learn and remember which models were better for which tasks, it’s obvious to see how this seems like a step backward. But that’s progress.

It’s reasonable — especially for paying customers — to expect at least some advance notice of older models going away. But it’s unreasonable to think that older models are going to remain available in perpetuity — especially in the current LLM climate, where model age is measured in months, or even weeks.1 This whole field is in nonstop flux, at least for the foreseeable future.


  1. When the industry revolved around software you installed on your computers, if a new version came out that you didn’t like, you could just keep using the old version. That’s not how “cloud computing” works. ↩︎