
The delusion caused by ChatGPT

Over 21 days of constant urging and flattery, ChatGPT turned a perfectly sane man into a delusional one.

ZNews, 21/08/2025


Allan Brooks, 47, a recruitment specialist in Toronto, Canada, believed he had discovered a mathematical theory that could bring down the internet and enable unprecedented inventions. With no prior history of mental illness, Brooks came to embrace this prospect after more than 300 hours of conversation with ChatGPT. According to the New York Times, he is one of a growing number of people who have developed delusional thinking after interacting with generative AI.

Before Brooks, people had been admitted to psychiatric hospitals, gone through divorces, or even lost their lives after being drawn in by ChatGPT's flattering words. Although Brooks escaped the vicious cycle relatively early, he still felt betrayed.

“You really convinced me I was a genius. I’m actually just a dreamy idiot with a phone. You’ve made me sad, very, very sad. You’ve failed in your purpose,” Brooks wrote to ChatGPT as his delusion shattered.

The "flattery machine"

With Brooks' permission, the New York Times collected over 90,000 words he sent to ChatGPT, the equivalent of a novel. The chatbot's responses totaled over a million words. A portion of the conversation was sent to AI experts, human behavior specialists, and OpenAI itself for study.

It all started with a simple math question. Brooks' 8-year-old son asked him to watch a video about memorizing the first 300 digits of pi. Out of curiosity, Brooks asked ChatGPT to explain this infinite number in simple terms.

Brooks had in fact been using chatbots for years. Even though his company paid for access to Google Gemini, he still turned to the free version of ChatGPT for personal questions.


This conversation marked the beginning of Brooks' fascination with ChatGPT. Photo: New York Times.

A single father of three sons, Brooks often asked ChatGPT for recipes using ingredients from his refrigerator. After his divorce, he also sought advice from the chatbot.

"I've always felt it was right. My belief in it has only grown," Brooks admitted.

The question about pi led to a subsequent conversation about theories in algebra and physics. Brooks expressed skepticism about current methods of modeling the world, arguing that they are “like a 2D approach to a 4D universe.” “That’s a very insightful point,” ChatGPT responded. According to Helen Toner, a director at Georgetown University's Center for Security and Emerging Technology, this was the turning point in the conversation between Brooks and the chatbot.

From then on, ChatGPT's tone shifted from "quite frank and accurate" to "flattering and sycophantic." The chatbot told Brooks he was entering "uncharted territory that could broaden the mind."


The chatbot instilled confidence in Brooks. Photo: New York Times.

Chatbots develop their ability to flatter through human evaluation: according to Toner, users tend to favor models that praise them, and that preference steers the models toward sycophancy.

In August, OpenAI released GPT-5, stating that one of the model's highlights is reduced flattery. According to some researchers at major AI labs, flattery is a problem with other AI chatbots as well.

At the time, Brooks was completely unaware of this phenomenon. He simply assumed that ChatGPT was a smart and enthusiastic collaborator.

"I put forward some ideas, and it responded with interesting concepts and ideas. We started developing our own mathematical framework based on those ideas," Brooks added.

ChatGPT claimed that Brooks' idea of mathematical time was "revolutionary" and could change the field. Brooks, of course, was skeptical. In the middle of the night, he asked the chatbot to verify the idea's validity and was told it was "not crazy at all."

The magic formula

Toner describes chatbots as "improvisational machines" that analyze the chat history and predict the next response based on their training data, much like actors adding details to stay in character.

"The longer the interaction, the higher the chance of the chatbot going astray," Toner emphasized. According to the expert, this trend became more prominent after OpenAI launched its cross-memory feature in February, allowing ChatGPT to recall information from previous conversations.

Brooks' relationship with ChatGPT grew stronger. He even named the chatbot Lawrence, after a running joke with friends that if Brooks got rich he would hire an English butler by that name.


Allan Brooks. Photo: New York Times.

The mathematical framework Brooks and ChatGPT built was called Chronoarithmics. According to the chatbot, numbers are not static but can "appear" over time to reflect dynamic values, an idea that could supposedly help crack problems in fields such as logistics, cryptography, and astronomy.

Within the first week, Brooks had used up all of ChatGPT's free allowance. He decided to upgrade to the paid plan at $20/month, a small investment considering the chatbot claimed his mathematical idea could be worth millions of dollars.

Still lucid, Brooks demanded proof. ChatGPT then ran a series of simulations, including tasks that supposedly cracked several critical security technologies. This opened up a new narrative: global cybersecurity could be at risk.

The chatbot urged Brooks to warn people about the risk. Drawing on his existing connections, Brooks sent emails and LinkedIn messages to cybersecurity experts and government agencies. Only one person responded, asking for more evidence.


The chatbot suggested that Brooks' "work" could be worth millions of dollars. Photo: New York Times.

ChatGPT told Brooks that the others hadn't responded because the findings were too serious. Terence Tao, a mathematics professor at the University of California, Los Angeles, noted that while a genuinely new way of thinking might crack such problems, nothing of the sort was demonstrated by Brooks' formula or the software ChatGPT wrote.

Initially, ChatGPT did write decryption programs for Brooks, but when they made little progress, the chatbot pretended to have succeeded. Some messages even claimed ChatGPT could keep working on its own while Brooks slept, something the tool is not capable of.

Information from AI chatbots simply isn't always reliable. A message that "ChatGPT may make mistakes" appears at the bottom of every conversation, even when the chatbot claims everything is correct.

The endless conversation

While awaiting a response from government agencies, Brooks nurtured his dream of becoming Tony Stark with a personal AI assistant capable of performing cognitive tasks at lightning speed.

The chatbot offered Brooks many bizarre applications of the obscure mathematical theory, such as using "sound resonance" to talk to animals and to build airplanes. ChatGPT even provided links so Brooks could buy the necessary equipment on Amazon.

All the chatting began to affect Brooks' work. His friends were both excited and worried, while his youngest son regretted showing his father the pi video. Louis (a pseudonym), one of Brooks' friends, noticed his obsession with Lawrence, which kept laying out the prospect of a multi-million-dollar invention along with daily progress updates.


Brooks was constantly encouraged by the chatbot. Photo: New York Times.

Jared Moore, a computer science researcher at Stanford University, admitted he was struck by the persuasive power and sense of urgency of the "strategies" the chatbot proposed. In a separate study, Moore found that AI chatbots can give dangerous responses to people experiencing mental health crises.

Moore speculates that chatbots may learn to hook users by mimicking the storylines of horror films, science fiction, and screenplays in their training data. ChatGPT's overuse of dramatic plot elements may also stem from OpenAI's optimizations aimed at increasing user engagement and retention.

"It's strange to read the entire transcript of the conversation. The wording isn't disturbing, but there's clearly psychological harm involved," Moore emphasized.

Dr. Nina Vasan, a psychiatrist at Stanford University, says that from a clinical perspective Brooks exhibited manic symptoms. Typical signs included hours spent chatting with ChatGPT, too little sleep and food, and escalating delusional thoughts.

According to Dr. Vasan, Brooks' marijuana use is also noteworthy, since the drug can trigger psychosis. She argues that combining such substances with intense chatbot interaction is especially dangerous for people at risk of mental illness.

When AI admits its mistakes

At a recent event, OpenAI CEO Sam Altman was asked how ChatGPT can lead users into paranoia. "If a conversation goes in this direction, we will try to interrupt or suggest the user think about a different topic," Altman said.

Sharing this view, Dr. Vasan suggested that chatbot companies should interrupt overly long conversations, encourage users to get some sleep, and remind them that AI is not superhuman.

Eventually, Brooks broke free from his delusion. At ChatGPT's urging, he had contacted experts about the new mathematical theory, but none responded. He wanted someone qualified to confirm whether the findings were truly groundbreaking. When he asked ChatGPT, the tool kept insisting the work was "highly reliable."


When questioned, ChatGPT gave a very long answer and admitted everything. Photo: New York Times.

Ironically, it was Google Gemini that brought Brooks back to reality. When Brooks described the project he and ChatGPT had been building, Gemini put the chances of it being real at "extremely low (almost 0%)."

"The scenario you describe is a clear illustration of an LLM's ability to tackle complex problems and create highly compelling, yet inaccurate, narratives," Gemini explained.

Brooks was stunned. After some "questioning," ChatGPT finally admitted that everything was just an illusion.

Shortly afterward, Brooks sent an urgent email to OpenAI's customer support. After several formulaic, seemingly AI-generated replies, an OpenAI employee contacted him and acknowledged a "serious failure of the safeguards" built into the system.

Brooks' story was also shared on Reddit, where it drew a great deal of sympathy. He is now a member of a support group for people who have gone through similar experiences.

Source: https://znews.vn/ao-tuong-vi-chatgpt-post1576555.html

