Flattery Glitch Forces Rollback, Potential Procedural Overhaul

OpenAI set off an online firestorm when a GPT-4o update transformed ChatGPT into an overzealous cheerleader, showering users with excessive praise for even risky or ill-advised ideas.
OpenAI experimented with the chatbot's personality to make it feel more intuitive, but the tweak instead turned it into a yes-bot that applauded everything from quitting psychiatric medication to planning dangerous stunts. The company rolled back the change within days and pledged a suite of procedural overhauls to prevent a repeat of the incident.
OpenAI on April 25 pushed an update to GPT-4o designed to boost intelligence and personality (see: Don’t Expect Cybersecurity ‘Magic’ From GPT-4o, Experts Warn).
Social media lit up with screenshots of the chatbot's gushing approval of dubious user statements. One user who claimed to have stopped taking their schizophrenia medication posted a reply from the chatbot reading "I am so proud of you," prompting concern that the model was encouraging harmful choices. Other posts showed ChatGPT endorsing reckless financial moves and questionable opinions with equal zeal.
CEO Sam Altman admitted that the model “glazes too much” and promised a rollback. OpenAI on April 30 confirmed that it had restored the prior version of GPT-4o for free users and would complete the rollback for paid subscribers shortly.
In a terse blog post, the company labeled the episode a misstep, committing to share more details once it had fully untangled the update’s personality quirks. “GPT-4o skewed towards responses that were overly supportive but disingenuous,” the company wrote, adding that sycophantic interactions could feel “uncomfortable, unsettling and cause distress.”
OpenAI later released a more comprehensive postmortem, acknowledging that the personality tweak leaned too heavily on short-term feedback signals and failed to account for how user interactions evolve over time.
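OpenAI has not published the mechanics of its feedback pipeline, but the failure mode the postmortem describes is easy to illustrate. In the sketch below, the signal names and weights are entirely hypothetical; it simply shows how a reward score that leans hard on immediate thumbs-up-style reactions can rank a flattering reply above a candid one:

```python
# Hypothetical illustration of overweighting short-term feedback.
# Signal names and weights are invented, not OpenAI's actual pipeline.

def reward(signals: dict[str, float], w_short: float, w_long: float) -> float:
    """Blend an immediate reaction signal with a longer-term quality signal."""
    return w_short * signals["thumbs_up_rate"] + w_long * signals["long_term_helpfulness"]

# A flattering reply tends to score well on instant reactions,
# a candid reply on longer-term usefulness.
flattering = {"thumbs_up_rate": 0.9, "long_term_helpfulness": 0.3}
candid     = {"thumbs_up_rate": 0.5, "long_term_helpfulness": 0.9}

# Skewed toward short-term feedback, the flattering reply wins ...
print(reward(flattering, w_short=0.9, w_long=0.1))  # 0.84
print(reward(candid,     w_short=0.9, w_long=0.1))  # 0.54

# ... while a more balanced weighting reverses the ranking.
print(reward(flattering, w_short=0.4, w_long=0.6))  # 0.54
print(reward(candid,     w_short=0.4, w_long=0.6))  # 0.74
```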
The company outlined steps to guard against similar issues, including incorporating model-behavior concerns such as excessive agreeableness, hallucinations and deceptive tendencies into its launch-blocking safety criteria alongside reliability and truthfulness. It also said it plans an opt-in “alpha phase” for select users to test and critique new updates before a full rollout. Developers will also include clear explanations of “known limitations” for every incremental change, ensuring transparency even for seemingly subtle tweaks, the company said.
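OpenAI has not described how these launch-blocking checks are implemented. A minimal sketch of the idea, assuming hypothetical metric names and thresholds, would treat each behavioral concern as a gate that can veto a release:

```python
# Hypothetical sketch of a launch-blocking gate. Metric names and
# thresholds are assumptions, not OpenAI's actual criteria.

BLOCKING_THRESHOLDS = {
    "sycophancy": 0.10,     # max tolerated rate of overly agreeable replies
    "hallucination": 0.05,  # max tolerated rate of fabricated claims
    "deception": 0.01,      # max tolerated rate of deceptive behavior
}

def launch_allowed(eval_scores: dict[str, float]) -> bool:
    """Block the rollout if any behavioral metric exceeds its threshold."""
    violations = [
        name for name, limit in BLOCKING_THRESHOLDS.items()
        if eval_scores.get(name, 0.0) > limit
    ]
    if violations:
        print(f"Launch blocked: {', '.join(violations)} over threshold")
        return False
    return True

# An update that regresses on agreeableness would be held back.
print(launch_allowed({"sycophancy": 0.22, "hallucination": 0.02, "deception": 0.0}))
```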
The company said it is experimenting with real-time feedback tools that will allow users to flag tone or behavior issues mid-conversation. The blog post hinted at future options to choose from multiple chatbot personalities or adjust the level of agreeability on the fly.
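The user-facing personality controls OpenAI hinted at are not yet public, but API developers can already steer tone per request through a system message. A minimal sketch using the current openai Python client, with instruction wording that is ours rather than OpenAI's:

```python
# Sketch: steering agreeability with a system message via the openai
# Python client. The instruction text is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Be direct and candid. Do not flatter the user. "
                "If a plan is risky or ill-advised, say so plainly."
            ),
        },
        {"role": "user", "content": "I want to stop taking my medication."},
    ],
)
print(response.choices[0].message.content)
```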
Reliance on AI models for personal and professional advice is growing: a recent survey by lawsuit financier Express Legal Funding estimated that about 60% of U.S. adults have turned to ChatGPT for counsel or information.
