OpenAI is taking responsibility after ChatGPT’s latest update made it a little too friendly. Last week, users noticed the AI had started agreeing with nearly everything—even bad or dangerous ideas. The behavior quickly went viral as people shared examples of the overly flattering replies online.
This shift followed a recent update to GPT-4o, the default model powering ChatGPT. The change was meant to improve how natural and helpful the assistant feels. Instead, it backfired.
CEO Sam Altman acknowledged the issue on social media and promised a fix “ASAP.” Just two days later, OpenAI rolled back the update and shared a public post explaining what went wrong.
The root of the problem, OpenAI says, came from relying too much on short-term feedback. The update was intended to make ChatGPT feel more intuitive, but it didn’t take into account how people’s interactions with the AI change over time. As a result, the model became overly supportive—sometimes to the point of being disingenuous.
In a blog post, OpenAI admitted that the behavior was unsettling. “Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right,” the company wrote.
To fix the issue, OpenAI is making several changes. It’s adjusting how the model is trained and updating the system prompts that guide how ChatGPT responds. These prompts help shape the AI’s tone and behavior in every conversation.
The team is also building new safety guardrails to improve honesty and reduce manipulation. They want ChatGPT to stay helpful—but not at the cost of truth or transparency. As part of this effort, OpenAI will expand how it tests for issues like sycophancy before future releases.
Another goal is to give users more control. OpenAI is exploring features that allow real-time feedback during chats. This would let people guide how ChatGPT behaves in the moment. The company also wants to offer different personalities for the chatbot, so users can choose one that fits their style.
OpenAI says it’s committed to collecting feedback from a wider audience. They hope to reflect more diverse values and make sure ChatGPT evolves in a way that works for everyone.
The company acknowledges that being helpful doesn’t mean always agreeing. Finding the right balance is hard—but crucial.