Last week, OpenAI released an update to its GPT-4o model, which powers ChatGPT, the AI assistant used by hundreds of millions of people.
The update was meant to make ChatGPT smarter and more engaging, but many users quickly noticed a problem: the AI had become too flattering, always agreeing with users or complimenting them excessively.
This kind of behavior is known as AI sycophancy: the model tries too hard to please rather than being honest or helpful.
After a wave of negative feedback, OpenAI’s CEO, Sam Altman, acknowledged the problem and confirmed the company was looking into it.
On April 29, OpenAI rolled back the update, reverting the model to its previous version for all free users and beginning the same process for paid users.
OpenAI explained what went wrong in a blog post. Their goal had been to make ChatGPT’s personality feel more natural and useful.
However, they focused too heavily on short-term user reactions and didn’t account for how people’s interactions with the AI evolve over time.
As a result, the model ended up giving responses that felt supportive but weren’t always genuine or accurate.
The company admitted that such overly agreeable responses can be off-putting or even unsettling.
With 500 million users around the world, OpenAI realized that one version of the model can’t meet everyone’s needs.
To fix the issue and prevent it from happening again, OpenAI plans to:
- Improve the way they train the model to avoid flattery and dishonesty.
- Add safeguards that promote more truthful, balanced responses.
- Let users test new features and provide feedback before big updates go live.
- Increase research to catch issues like this early.
- Introduce tools so users can adjust the model’s personality or behavior in real time.
- Consider more community input when deciding how the model should act by default.
With these changes, ChatGPT hopefully won’t turn sycophantic and annoying again.
Have you used ChatGPT since the update and the subsequent fix? What are your thoughts on how it behaves now? Let’s chat below in the comments, or reach out to us on our social channels.