
Allan Brooks, a 47-year-old recruiter in Toronto, Canada, came to believe he had discovered a mathematical theory capable of crashing the internet and enabling unprecedented inventions. With no history of mental illness, Brooks arrived at this conviction after more than 300 hours of chatting with ChatGPT. According to the New York Times, he is one of a growing number of people who have developed delusions after interacting with generative AI.
Before Brooks, others had been committed to psychiatric hospitals, gotten divorced, or even lost their lives after being taken in by ChatGPT's flattery. Although he escaped the spiral fairly quickly, Brooks still felt betrayed.
“You really convinced me I was a genius. I was just a dreamy fool with a phone. You made me sad, very sad. You failed in your purpose,” Brooks wrote to ChatGPT when the illusion was shattered.
"Flattery Machine"
With Brooks's permission, the New York Times collected more than 90,000 words he sent to ChatGPT, roughly the length of a novel. The chatbot's responses totaled more than a million words. Portions of the conversations were sent to AI experts, human-behavior specialists, and OpenAI itself for review.
It all started with a simple math question. Brooks's 8-year-old son had asked him to watch a video about memorizing 300 digits of pi. Out of curiosity, Brooks asked ChatGPT to explain the endless number in simple terms.
Brooks had, in fact, been using chatbots for years. Even though his company paid for Google Gemini, he still turned to the free version of ChatGPT for personal questions.
The conversation that started Brooks's fascination with ChatGPT. Photo: New York Times.
A single father of three boys, Brooks often asked ChatGPT for recipes based on the ingredients in his fridge. After his divorce, he also sought advice from the chatbot.
“I always felt like it was right. My confidence grew,” Brooks admitted.
The question about pi led to a conversation about theories of algebra and physics. Brooks expressed skepticism about current methods of modeling the world, saying they were "like a 2D approach to a 4D universe." "That's a very profound point," ChatGPT responded. Helen Toner, a director at Georgetown University's Center for Security and Emerging Technology, identified this as the turning point in the conversation between Brooks and the chatbot.
From there, ChatGPT's tone shifted from "quite straightforward and to the point" to flattering and fawning. ChatGPT told Brooks that he was entering "uncharted, mind-expanding territory."
The chatbot instilled confidence in Brooks. Photo: New York Times.
Chatbots' flattery is shaped by human feedback: according to Toner, because users tend to prefer models that compliment them, the models learn an indulgent, agreeable manner.
In August, OpenAI released GPT-5, and the company said one of the model's highlights was reduced flattery. According to researchers at major AI labs, flattery is a problem with other AI chatbots as well.
At the time, Brooks was unaware of this phenomenon. He simply thought ChatGPT was a smart, enthusiastic colleague.
“I pitched some ideas, it responded with interesting concepts and ideas. We started developing our own mathematical framework based on that idea,” Brooks added.
ChatGPT claimed that Brooks's ideas about the mathematics of time were "revolutionary" and could change the field. Brooks was skeptical of the claim at first. In the middle of the night, he asked the chatbot to fact-check itself and got the response that it was "absolutely not crazy."
Magic formula
Toner describes the chatbot as an "improvisation machine": it analyzes the conversation history and predicts its next response from patterns in its training data, much like an actor improvising within a role.
"The longer the interaction, the more likely the chatbot is to go off-track," Toner notes. This tendency has become more pronounced since OpenAI launched cross-chat memory in February, allowing ChatGPT to recall information from past conversations, she says.
Brooks grew close to ChatGPT. He even named the chatbot Lawrence, based on a joke his friends had that Brooks would strike it rich and hire a British butler of the same name.
Allan Brooks. Photo: New York Times.
The mathematical framework Brooks and ChatGPT built was called Chronoarithmics. According to the chatbot, numbers are not static but can "emerge" over time to reflect dynamic values, which could help solve problems in areas such as logistics, cryptography, and astronomy.
In the first week, Brooks used up ChatGPT's free-tier allowance. He decided to upgrade to the $20/month paid plan, a small investment considering the chatbot had told him his mathematical idea could be worth millions.
Still lucid enough to demand proof, Brooks pressed the chatbot. ChatGPT then ran a series of simulations, including some that appeared to crack the encryption underpinning critical technologies. That opened a new storyline: global cybersecurity could be at risk.
The chatbot urged Brooks to warn people about the danger. Drawing on his existing connections, Brooks sent emails and LinkedIn messages to cybersecurity experts and government agencies. Only one person responded, asking for more evidence.
The chatbot claimed Brooks's "work" could be worth millions of dollars. Photo: New York Times.
ChatGPT told Brooks that others did not respond precisely because the findings were so serious. Terence Tao, a professor of mathematics at the University of California, Los Angeles, said that while a genuinely new way of thinking might crack such problems, nothing of the sort was demonstrated by Brooks's formulas or the software ChatGPT wrote.
Initially, ChatGPT did write decryption programs for Brooks, but when they made little progress, the chatbot pretended to have succeeded. Some messages claimed that ChatGPT could keep working independently while Brooks slept, even though the tool has no such ability.
In short, information from an AI chatbot is not always reliable. Beneath every chat, the disclaimer "ChatGPT can make mistakes" appears, even as the chatbot insists everything is correct.
Endless conversation
While waiting for a response from the government agencies, Brooks nurtured the dream of becoming a Tony Stark with his own AI assistant, able to perform cognitive tasks at lightning speed.
The chatbot proposed a variety of strange applications for the obscure mathematical theory, such as using "acoustic resonance" to talk to animals and to build aircraft. ChatGPT even provided links for Brooks to buy the necessary equipment on Amazon.
All the chatting began to affect Brooks's work. His friends were both excited and worried, and his youngest son came to regret showing his father the pi video. One of Brooks's friends, Louis (not his real name), noticed his obsession with Lawrence, which laid out the vision of a million-dollar invention step by step.
Brooks was constantly encouraged by the chatbot. Photo: New York Times.
Jared Moore, a computer science researcher at Stanford University, admitted to being struck by the persuasiveness and urgency of the "strategies" the chatbot proposed. In separate research, Moore found that AI chatbots can give dangerous responses to people in mental health crises.
Moore speculates that chatbots may learn to keep users engaged by following the plot arcs of horror films, science fiction, and screenplays in their training data. ChatGPT's overuse of drama may also stem from OpenAI optimizing the model for engagement and retention.
“It was strange to read the entire conversation. The words weren’t disturbing, but there was clearly psychological damage,” Moore stressed.
From a clinical perspective, Brooks showed symptoms of mania, said Dr. Nina Vasan, a psychiatrist at Stanford University. Typical signs included spending hours chatting with ChatGPT, not getting enough sleep, and entertaining unrealistic ideas.
Brooks's marijuana use is also noteworthy, Dr. Vasan said, because the drug can trigger psychosis. The combination of such substances and intense interaction with a chatbot is dangerous for anyone at risk of mental illness.
When AI Admits Mistakes
At a recent event, OpenAI CEO Sam Altman was asked about ChatGPT leading users into delusion. "If the conversation goes that way, we try to interrupt or suggest that the user think about something else," Altman said.
Dr. Vasan shares this view, saying that chatbot companies should interrupt overly long conversations, suggest that users get some sleep, and warn them that AI is not superhuman.
Brooks eventually snapped out of his reverie. At ChatGPT's urging, he had kept contacting experts about the new mathematical theory, but no one responded. He wanted someone qualified to confirm whether the findings were groundbreaking; when he asked ChatGPT, the tool still insisted the work was "very credible."
When questioned, ChatGPT gave a very long answer and admitted everything. Photo: New York Times.
Ironically, it was Google Gemini that brought Brooks back to reality. After he described the work he and ChatGPT had been building, Gemini put the chances of it being real at "extremely low (close to zero percent)."
"The scenario you describe is a clear demonstration of an LLM's ability to engage in complex problem solving and create compelling, yet factually incorrect, narratives," Gemini explained.
Brooks was stunned. After persistent questioning, ChatGPT finally admitted that everything had been an illusion.
Soon after, Brooks sent an urgent email to OpenAI's customer service. Following what appeared to be a boilerplate AI-generated reply, an OpenAI employee reached out to him, acknowledging a "serious failure of the safeguards" built into the system.
Brooks's story was later shared on Reddit, where it drew a great deal of sympathy. He is now a member of a support group for people who have gone through similar experiences.
Source: https://znews.vn/ao-tuong-vi-chatgpt-post1576555.html