OpenAI CEO Sam Altman has acknowledged that ChatGPT’s writing quality has declined in recent months, attributing the issue to the way the company trained its latest artificial intelligence model. Speaking candidly during OpenAI’s first-ever developer town hall, Altman said the drop in writing performance was an unintended consequence of prioritising math, reasoning, and coding capabilities.
The town hall, held on Tuesday and streamed live on YouTube, brought together AI developers and partners from across the ecosystem. Over the course of the nearly 50-minute session, Altman addressed a wide range of topics, including OpenAI’s go-to-market strategy, future product roadmap, cost-versus-performance challenges, and ChatGPT’s growing role as a scientific and engineering collaborator.
OpenAI Acknowledges Decline in ChatGPT’s Writing Performance
One of the most notable moments from the event came when OpenAI openly admitted that its newest flagship model, GPT-5.2, performs worse in writing tasks compared to its predecessor, GPT-4.5. While GPT-5.2 delivers major improvements in coding, mathematics, and logical reasoning, the company conceded that these gains came at the expense of creative and narrative writing quality.
When questioned by Ben Hylak, co-founder of AI startup Raindrop, about the decline in writing performance, Altman did not mince words. “I think we just screwed that up,” he said. “We plan to make GPT-5.x significantly better at writing than 4.5 over time.”
Altman explained that OpenAI deliberately concentrated most of its limited research bandwidth on making GPT-5.2 exceptionally strong in areas such as intelligence, reasoning, coding, and engineering. “We made a decision—one that made sense at the time—to focus heavily on those capabilities,” he said. “But when you focus intensely on one thing, you sometimes neglect another.”
Why OpenAI Prioritised Coding and Reasoning
From a business perspective, OpenAI’s strategy aligns with its revenue goals. While consumer subscriptions to ChatGPT face price sensitivity and practical pricing ceilings, enterprise customers are more willing to pay higher fees under token-based pricing models. As a result, capabilities that appeal directly to developers and businesses—such as faster coding, better reasoning, and improved problem-solving—were prioritised during GPT-5.2’s development.
This focus, however, led to a noticeable dip in conversational depth and writing quality. Altman reassured users that this trade-off is temporary, emphasising that future updates will rebalance the model’s strengths and significantly enhance writing performance.
The Jevons Paradox and the Future of AI Jobs
During the discussion, Altman also touched on what he called AI’s version of the Jevons Paradox—the economic theory that increased efficiency lowers costs, which in turn boosts demand and overall consumption. Applied to artificial intelligence, this suggests that cheaper and faster AI-powered coding could actually increase demand for software development rather than reduce it.
According to Altman, greater efficiency in coding may lead to the creation of entirely new roles rather than eliminating existing ones. “I think the engineering profession is going to change a lot,” he said. “Many more people will be able to use computers to achieve what they want, create more value, and capture more value.”
He added that while the nature of engineering work will evolve, tasks such as manual typing and debugging will take up less time. Instead, professionals will focus more on guiding computers, designing solutions, and building experiences that help others achieve their goals.
Overall, OpenAI’s admission marks a rare moment of transparency from the AI giant. While GPT-5.2 may currently lag in writing quality, the company’s leadership has signalled that restoring—and surpassing—previous writing standards is a key priority for upcoming releases.