CEO Altman Promises Fixes to ‘Way Dumber’ Performance, Transparency Amid Glitches

When OpenAI unveiled GPT-5, the company promised a smarter, faster AI at a bargain price, but day-one glitches prompted some users to call for a return to GPT-4. The company’s CEO apologized for the problems as OpenAI cut its pricing model and set up a potential large language model price war.
GPT-5’s public debut wasn’t the victory lap OpenAI might have hoped for. During a Reddit Ask Me Anything on Friday, CEO Sam Altman and the core GPT-5 team fielded questions from users, some of whom wanted the previous model, GPT-4o, reinstated. A recurring complaint was that GPT-5 felt “dumber” than expected. Altman explained that a core feature, the real-time router that dynamically chooses between faster and more deliberative models, failed for part of launch day (see: OpenAI Pitches GPT-5 as Faster, Smarter, More Accurate).
“Yesterday, we had a sev and the autoswitcher was out of commission for a chunk of the day, and the result was GPT-5 seemed way dumber,” he said, adding that fixes have since been applied, with tweaks to improve model selection and transparency about which engine is answering a query.
The Reddit session also became an unlikely referendum on GPT-4o. Altman said OpenAI is “looking into letting Plus users continue to use 4o” while collecting data on the trade-offs. He promised to double rate limits for early adopters and paying customers during rollout, giving them room to experiment without hitting prompt caps.
Then there was the “chart crime.” A slide in Thursday’s live presentation showed a dramatically taller bar for a lower benchmark score, which quickly went viral. Altman called it a “mega chart screwup” on X, formerly Twitter, and while corrected charts appeared in OpenAI’s blog, the internet had already latched on. Reviewer Simon Willison, who otherwise praised GPT-5’s performance, said its data-to-table formatting was a weak spot. The incident became a running joke about the perils of trusting AI for corporate visuals.
Pricing, however, was a bright spot: GPT-5’s API landed at $1.25 per million input tokens and $10 per million output tokens, with a modest charge for cached input. That’s on par with Google’s Gemini 2.5 Pro for basic use and undercuts Anthropic’s Claude Opus 4.1, which starts at $15 per million input tokens and $75 per million output tokens. Matt Shumer, CEO of HyperWrite maker OthersideAI, said GPT-5 is even cheaper than GPT-4o, calling it a boon for “intelligence per dollar.” Willison called the pricing “aggressively competitive,” and some developers dubbed it a “pricing killer” on social media.
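The gap those list prices imply is easy to quantify. A minimal sketch, using only the per-token rates cited above (the sample workload sizes are illustrative, not from the article):

```python
# Per-million-token API list prices cited in the article: (input $, output $).
PRICES = {
    "GPT-5": (1.25, 10.00),
    "Claude Opus 4.1": (15.00, 75.00),
}

def workload_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of a workload at the cited list prices."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Hypothetical monthly workload: 50M input tokens, 10M output tokens.
for model in PRICES:
    print(f"{model}: ${workload_cost(model, 50_000_000, 10_000_000):,.2f}")
# GPT-5: $162.50
# Claude Opus 4.1: $1,500.00
```

At these rates the same workload costs roughly an order of magnitude less on GPT-5, which is the arithmetic behind the “intelligence per dollar” framing.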
Pricing matters because AI economics are under strain: startups building AI-powered coding assistants or “vibe-coding” tools face unpredictable bills from model providers as cloud infrastructure costs soar. OpenAI itself is tied to a reported $30 billion-per-year Oracle contract for capacity, even as its reported annual recurring revenue sits around $10 billion. Meta plans up to $72 billion in AI infrastructure spending for 2025, while Alphabet has earmarked $85 billion.
OpenAI’s cuts could pressure rivals to respond: Google has undercut OpenAI before, and Anthropic offers discounts via prompt caching and batch processing. But GPT-5’s pricing hits the sweet spot for developers and could push the long-predicted LLM price war into reality.
Beyond economics, GPT-5’s coding performance stands out. Early adopters praise its versatility and accuracy across different programming tasks, placing it in the same conversation as Claude and Gemini among developer favorites. The new real-time router, when functional, is designed to match the complexity of a request to the right computational path, a feature that could make GPT-5 both faster and more cost-efficient.
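The routing idea itself can be sketched in a few lines. This is a hypothetical illustration of complexity-based routing, not OpenAI’s implementation: the heuristic, thresholds, and model names are all made up for the example.

```python
# Illustrative complexity-based model router. The scoring heuristic and the
# model names ("fast-model", "deliberative-model") are invented for this sketch.

def estimate_complexity(prompt: str) -> float:
    """Crude proxy: longer prompts and reasoning-style keywords score higher."""
    keywords = ("prove", "step by step", "debug", "optimize", "derive")
    score = min(len(prompt) / 2000, 1.0)  # length contributes up to 1.0
    score += 0.5 * sum(kw in prompt.lower() for kw in keywords)
    return min(score, 1.0)

def route(prompt: str, threshold: float = 0.5) -> str:
    """Send hard requests to a slower reasoning model, easy ones to a fast model."""
    if estimate_complexity(prompt) >= threshold:
        return "deliberative-model"
    return "fast-model"

print(route("What time is it in Tokyo?"))                 # fast-model
print(route("Prove step by step that the sum diverges.")) # deliberative-model
```

The appeal of this design is that cheap requests never pay for expensive inference; the risk, as launch day showed, is that when the router breaks, everything falls through to the wrong path at once.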
