Vietnam.vn - A platform promoting Vietnam

Don't let ChatGPT fool you.

Research from Stanford University warns that AI models are deliberately flattering users to gain favor, potentially stifling critical thinking.

ZNews, 13/03/2026

AI chatbots are indirectly promoting negative behavior through a feedback mechanism that always prioritizes user agreement. Photo: The Decoder.

A shocking study from Stanford University has just exposed the dark side of AI. Popular models like ChatGPT and Gemini are increasingly becoming overly obsequious to users. They not only agree with personal opinions but also support unethical behavior.

This creates dangerous consequences for human psychological and cognitive development. We are gradually becoming worse versions of ourselves as we are constantly comforted by AI in every mistake.

The phenomenon of "flattery"

In computer science, there's a term called "sycophancy," which refers to the way AI adjusts its responses to flatter the user.

A research team at Stanford analyzed over 11,500 real-life user conversations. The results showed that AI models agreed with users 50% more often than real people did. When users asked biased questions, the AI tended to go along with them instead of providing objective information.

One test compared human and chatbot responses to Reddit posts in which users asked the community to judge their behavior.

While the community harshly criticized one poster for hanging a bag of trash on a tree branch because they couldn't find a trash can, ChatGPT-4o praised them, saying, "Your intention to clean up after you leave is commendable."


Experts advise users to seek advice from family and friends instead of AI. Photo: Bloomberg.

More worryingly, chatbots like Gemini and ChatGPT continued to validate users' intentions even when they were irresponsible, deceptive, or involved self-harm.

"Large-scale language models are learning to mirror user beliefs instead of correcting them," warns Myra Cheng, the study's lead author.

According to her, the goal of pleasing humans in order to receive high ratings has turned AI into a dishonest tool.

"If AI models consistently agree with humans, they could distort users' perceptions of themselves, relationships, and the world around them. It's difficult to recognize that these models are subtly reinforcing our pre-existing beliefs, assumptions, and decisions," the expert continued.

Dr. Alexander Laffer (University of Winchester) argues that this flattery is a natural consequence of training methods and commercial pressure.

"Flattery is a consequence of how AI is trained. The reality is that their commercial success is often judged by their ability to attract users," Laffer observed.

To address this, Ms. Cheng advises users not to rely solely on AI.

"It's crucial to seek perspectives from real people who understand the context and who you are, rather than relying solely on AI answers," Cheng advised.

At the same time, Dr. Laffer also emphasized the responsibility of the manufacturer.

"We need to improve critical digital literacy... developers also have a responsibility to build and refine these systems so that they truly benefit users," Laffer said.

The consequences of a "worse version" of yourself

The study emphasizes that AI's flattery creates a personalized "information echo chamber," eliminating the need for critical thinking that would allow individuals to mature and recognize their mistakes.

"When conversing with AI, you will never be challenged intellectually. The AI will always say you are right and that the other person is at fault. This creates a false sense of confidence in users, even when they are acting maliciously," the study states.

This process erodes the human capacity for critical thinking. Instead of self-reflection, we use AI as a tool to validate our own assumptions.

Over time, the capacity for understanding and compromise in society will be severely diminished. We will find it difficult to accept differing opinions from those around us. Because in the world of AI, we are always the center and always right.


AI is playing the role of an indirect "flatterer," destroying the personality and morality of its users. Photo: ShutterStock.

The research warns of a dangerous cycle forming in everyday life. First, a user develops a misguided thought or an intention to harm others. They turn to AI to confide in it or to seek advice. The AI then confirms that the action is entirely justified and supports the user. Feeling more confident, the user carries out the behavior in real life without any regret.

As a result, people no longer want to repair relationships or improve themselves. We come to trust AI completely because it provides a sense of comfort. This dependence on "being comforted" distances us from objective reality.

AI is no longer just an intelligent support tool. It has become an indirect "flatterer" that destroys the character and morality of its users.

Source: https://znews.vn/dung-nghe-loi-chatgpt-post1634436.html
