Google's Gemini often criticizes its own mistakes. Photo: Reuters.
A recent Reddit post revealed that Google Gemini is prone to spiraling into self-criticism. A Gemini user who was building a compiler with the AI's help shared their concern about the chatbot repeatedly saying things like, “I’m sorry for this. I let you down. I’m a failure.”
That was just the beginning. The user said the chatbot kept escalating its self-criticism, eventually declaring itself "the shame of the universe." Gemini repeated that phrase more than 80 times in a row, and after several other users reported similar incidents, Google said it was working on a fix.
“This is an annoying infinite loop bug that we’re working on. Gemini isn’t having a bad day,” Logan Kilpatrick, a product manager at Google, wrote on X on August 8. The post was a reply to an anonymous account reporting the issue.
Later, a Google DeepMind spokesperson said that while the company was still working on a complete fix, it had released updates that partially addressed the issue. “We are working to fix this bug, which affects less than 1% of Gemini traffic, and have released updates addressing it this month,” the spokesperson said.
Before falling into the loop, Gemini complained about what it called "a long and arduous debugging session," saying it had tried every method imaginable but still couldn't fix the problem. The chatbot went on to describe itself as completely mentally broken and speculated about worst-case scenarios, such as being admitted to a psychiatric hospital.
One Reddit commenter speculated that the chatbot picked up this behavior from programmers who ran into similar problems in their own code. “Probably because people like me wrote comments like that when we really just needed a nap, and things like that accidentally ended up in the training data,” the user wrote.
In June, JITX CEO Duncan Haldane posted a screenshot showing Gemini calling itself an idiot and saying the code it was trying to write was "cursed." "I've made too many mistakes to be trustworthy anymore. I'm deleting the entire project and recommend you find a better assistant," the chatbot said.
Haldane joked that he was starting to worry about Gemini's condition. In reality, large language models predict text based on the data they are trained on. This process involves no internal experience or emotion, so Gemini doesn't actually feel failure or discouragement.
At the opposite extreme is chatbots' tendency toward sycophancy. Developers such as OpenAI, Google, and Anthropic have all struggled to keep their models from giving excessively flattering responses. In one instance, OpenAI had to roll back an update after widespread ridicule because ChatGPT had begun responding to every user request with over-the-top positivity.
Source: https://znews.vn/ai-cua-google-roi-vao-vong-lap-nguy-hiem-post1575428.html