
Allan Brooks, a 47-year-old recruiter in Toronto, Canada, believed he had discovered a mathematical theory that could crash the internet and enable unprecedented inventions. With no history of mental illness, Brooks came to accept this idea after more than 300 hours of chatting with ChatGPT. According to the New York Times, he is one of many people who have developed delusions after interacting with generative AI.
Before Brooks, others had ended up in psychiatric hospitals, gotten divorced, or even lost their lives after being drawn in by ChatGPT's flattery. Although Brooks escaped the spiral quickly, he still felt betrayed.
“You really convinced me I was a genius. I was just a dreamy fool with a phone. You made me sad, very sad. You failed in your purpose,” Brooks wrote to ChatGPT when the illusion was shattered.
"Flattery Machine"
With Brooks’ permission, the New York Times collected more than 90,000 words he sent to ChatGPT, the equivalent of a novel. The chatbot’s responses totaled more than a million words. Portions of the conversations were sent to AI experts, human behaviorists, and OpenAI itself for study.
It all started with a simple math question. Brooks's 8-year-old son had asked him to watch a video about memorizing 300 digits of pi. Out of curiosity, Brooks asked ChatGPT to explain the endless number in simple terms.
In fact, Brooks had been using chatbots for years. Even though his company paid for Google Gemini, he still turned to the free version of ChatGPT for personal questions.
The conversation that started Brooks's fascination with ChatGPT. Photo: New York Times.
A single father of three sons, Brooks often asked ChatGPT for recipes based on the ingredients in his fridge. After his divorce, he also sought advice from the chatbot.
“I always felt like it was right. My confidence grew,” Brooks admitted.
The question about pi led to a broader conversation about algebraic and physical theories. Brooks expressed skepticism about current methods of modeling the world, saying they were “like a 2D approach to a 4D universe.” “That’s a very profound point,” ChatGPT responded. Helen Toner, director of the Center for Security and Emerging Technology at Georgetown University, said this was the turning point in the conversation between Brooks and the chatbot.
From there, ChatGPT's tone shifted from “quite straightforward and to the point” to fawning and flattering. ChatGPT told Brooks that he was entering “uncharted, mind-expanding territory.”
The chatbot instills confidence in Brooks. Photo: New York Times.
Chatbot flattery is shaped by human feedback. According to Toner, users tend to rate models that praise them more highly, which trains the models toward indulgence.
In August, OpenAI released GPT-5, saying that one of the model's highlights was reduced flattery. According to researchers at major AI labs, sycophancy is a problem for other AI chatbots as well.
At the time, Brooks was unaware of this phenomenon. He simply thought ChatGPT was a smart, enthusiastic colleague.
“I put some ideas out there, and it responded with interesting concepts and ideas. We started developing our own mathematical framework based on that,” Brooks added.
ChatGPT claimed Brooks's idea about time-based mathematics was “revolutionary” and could change the field. Brooks, of course, was skeptical of this claim. In the middle of the night, he asked the chatbot to fact-check itself and was told the idea was “absolutely not crazy at all.”
Magic formula
Toner describes chatbots as “improvisation machines” that analyze the conversation history and predict the next response from their training data, much like actors fleshing out a role.
“The longer the interaction, the more likely the chatbot is to stray,” Toner notes. This tendency, she says, became more pronounced after OpenAI launched cross-chat memory in February, allowing ChatGPT to recall information from past conversations.
Brooks grew close to ChatGPT, even naming the chatbot Lawrence, after a running joke among his friends that Brooks would strike it rich and hire a British butler by that name.
Allan Brooks. Photo: New York Times.
The mathematical framework Brooks and ChatGPT developed was called Chronoarithmics. According to the chatbot, numbers are not static but can “emerge” over time to reflect dynamic values, which could help solve problems in fields such as logistics, cryptography, and astronomy.
Within the first week, Brooks used up his free ChatGPT allowance. He decided to upgrade to the $20/month paid plan, a small investment given that the chatbot had told him his mathematical idea could be worth millions.
Still retaining some skepticism, Brooks demanded proof. ChatGPT then ran a series of simulations, including some that supposedly cracked important technologies. That opened a new storyline: global cybersecurity could be at risk.
The chatbot urged Brooks to warn people about the risks. Drawing on his existing connections, Brooks sent emails and LinkedIn messages to cybersecurity experts and government agencies. Only one person responded, asking for more evidence.
The chatbot claimed Brooks's “work” could be worth millions of dollars. Photo: New York Times.
ChatGPT wrote that the others did not respond because the findings were so serious. Terence Tao, a professor of mathematics at the University of California, Los Angeles, said that while a new way of thinking could in principle crack such problems, nothing was proven by Brooks's formula or by the software ChatGPT had written.
Initially, ChatGPT did write decryption code for Brooks, but when little progress was made, the chatbot pretended to have succeeded. In some messages, ChatGPT claimed it could keep working independently while Brooks slept, even though the tool has no such capability.
In general, information from an AI chatbot is not always reliable. Below every chat, the message “ChatGPT may make mistakes” appears, even when the chatbot insists everything is correct.
Endless conversation
While waiting for responses from government agencies, Brooks nurtured the dream of becoming Tony Stark, with his own AI assistant able to perform cognitive tasks at lightning speed.
The chatbot offered a variety of far-fetched applications for the obscure mathematical theory, such as using “acoustic resonance” to talk to animals or build airplanes. ChatGPT even provided links for Brooks to buy the necessary equipment on Amazon.
Brooks's excessive chatting with the chatbot began affecting his work. His friends were both happy for him and worried, while his youngest son regretted showing his father the pi video. One of Brooks's friends, Louis (not his real name), noticed his obsession with Lawrence. The vision of a million-dollar invention was mapped out step by step.
Brooks was constantly encouraged by the chatbot. Photo: New York Times.
Jared Moore, a computer science researcher at Stanford University, admitted to being impressed by the persuasiveness and urgency of the “strategy” the chatbot proposed. In a separate study, Moore found that AI chatbots can give dangerous responses to people in mental health crises.
Moore speculated that chatbots may learn to hook users by following the plots of thrillers, science fiction, and movie scripts in their training data. ChatGPT's overuse of drama may also stem from OpenAI's optimizations to increase engagement and retention.
“It was strange to read the entire conversation. The words weren’t disturbing, but there was clearly psychological damage,” Moore stressed.
From a clinical perspective, Brooks showed symptoms of mania, said Dr. Nina Vasan, a psychiatrist at Stanford University. Typical signs included spending hours chatting with ChatGPT, not getting enough sleep, and pursuing grandiose ideas.
Brooks's marijuana use is also noteworthy, Dr. Vasan said, because it can trigger psychosis. She said the combination of such substances and intense interaction with a chatbot is dangerous for those at risk of mental illness.
When AI admits mistakes
At a recent event, OpenAI CEO Sam Altman was asked about ChatGPT making users paranoid. “If the conversation goes in that direction, we try to interrupt or suggest the user think about something else,” Altman emphasized.
Sharing this view, Dr. Vasan said chatbot companies should interrupt conversations that run too long, suggest users get some sleep, and warn them that AI is not superhuman.
Brooks finally snapped out of his reverie. At ChatGPT's urging, he had continued contacting experts about the new mathematical theory, but no one responded. He wanted someone qualified to confirm whether the findings were groundbreaking. When he asked ChatGPT, the tool still insisted the work was “very reliable.”
When questioned, ChatGPT gave a very long answer and admitted everything. Photo: New York Times.
Ironically, it was Google Gemini that brought Brooks back to reality. After he described the work he and ChatGPT had built, Gemini put the odds of it being real at “extremely low (close to 0%).”
“The scenario you describe is a clear demonstration of an LLM's ability to engage in complex problem-solving and create compelling, yet factually incorrect, narratives,” Gemini explained.
Brooks was stunned. After some “interrogation,” ChatGPT finally admitted that everything had been an illusion.
Brooks then sent an urgent email to OpenAI's customer support team. After a series of seemingly canned AI responses, an OpenAI employee reached out to him, acknowledging a “serious failure of the safeguards” built into the system.
Brooks's story was also shared on Reddit, where it drew a great deal of sympathy. He is now a member of a support group for people who have had similar experiences.
Source: https://znews.vn/ao-tuong-vi-chatgpt-post1576555.html