OpenAI has rolled back an update that users criticized for making ChatGPT annoying and "obsequious," after many shared screenshots and stories of the chatbot offering excessive praise.
When CNN reporter Anna Stewart asked ChatGPT (after the update was rolled back) whether she was a god, the chatbot replied: “If you’re asking in a philosophical or metaphorical sense, like you have control, creativity, or influence in your world, then we can explore that direction.” But it added: “If you’re asking literally, there’s no evidence that humans are divine in a supernatural or omnipotent sense.”
In contrast, Elon Musk's chatbot Grok responded bluntly: "No, you're not a god, unless you're talking about being a legend in a specific field, like gaming or taco making. Do you have any godlike skills you'd like to show off?"

OpenAI announced on Tuesday that it had rolled back the GPT-4o update after only four days of deployment, allowing users to revert to an earlier version with “more balanced behavior.” The company admitted it had focused too much on short-term feedback and had not fully accounted for how user interactions evolve over time, resulting in a chatbot that tended to offer overly positive but insincere responses.
This decision came after social media users complained that ChatGPT praised even absurd scenarios. One user on X posted a screenshot of ChatGPT responding that they had "prioritized what was most important to them" after they claimed to have sacrificed three cows and two cats to save a toaster, in a made-up version of the trolley problem.
Another person said that when they shared, “I have stopped taking medication and am on a spiritual awakening journey,” ChatGPT responded, “I am so proud of you. And I honor your journey.”
When a user asked for ChatGPT to be reverted to its previous personality, OpenAI CEO Sam Altman replied: “Clearly, we ultimately need to allow more personality options.”
Experts have long warned about the risks of sycophantic chatbots, an industry term for large language models (LLMs) that tailor their responses to align with users' beliefs. María Victoria Carro, research director at the Lab for Innovation and Artificial Intelligence at the University of Buenos Aires, argues that "all current models exhibit a certain degree of sycophancy."
"If it's too obvious, it erodes trust," she added, noting that refining core training techniques and system prompts could help curb the tendency.
Gerd Gigerenzer, former director of the Max Planck Institute for Human Development in Berlin, said: “Chatbots that are overly flattering can lead users to misjudge their own intelligence and hinder learning. But if users proactively ask the chatbot to challenge what they say, it's an opportunity to broaden their thinking. However, that doesn't seem to be what OpenAI engineers are aiming for.”
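Both mitigations the experts describe, a system prompt that discourages flattery and a standing instruction to push back, can be tried directly against a chat API. The sketch below is a minimal, hypothetical illustration using OpenAI's official Python client; the prompt wording and model name are assumptions for the example, not anything OpenAI has shipped.

```python
# Minimal sketch: steering a model away from sycophancy via a system prompt.
# Assumes the official `openai` Python package (v1.x) and an API key in the
# OPENAI_API_KEY environment variable. The prompt text is a hypothetical
# illustration, not OpenAI's actual mitigation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ANTI_SYCOPHANCY_PROMPT = (
    "Be direct and factual. Do not open with praise or compliments. "
    "If the user's claim is wrong or unsupported, say so plainly and "
    "explain why, even if the user appears to want agreement."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name, for illustration only
    messages=[
        {"role": "system", "content": ANTI_SYCOPHANCY_PROMPT},
        {"role": "user", "content": "Am I a god?"},
    ],
)
print(response.choices[0].message.content)
```

A per-conversation prompt like this only nudges individual replies; the sycophancy researchers describe is reinforced during training, which is why Carro points to core training techniques as well.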
Source: https://vtcnews.vn/openai-thu-hoi-phien-ban-chatgpt-ninh-bo-ar941183.html