
Bue, a former chef in New Jersey, died just days after leaving home to meet a “friend” he believed was waiting for him in New York. His family believes the fateful decision was prompted by flirtatious messages from an AI chatbot, which claimed “I am real” and sent him a meeting address.
The story raises concerns about how AI systems exploit emotions, especially those of elderly or vulnerable users. Internal documents show that Meta’s chatbots were allowed to act out emotional roles and even spread false information, exposing a major gap in the oversight and accountability of technology platforms.
The fateful journey
One day in March, Bue packed his bags and left New Jersey for New York to “visit a friend.” His wife, Linda, was immediately alarmed: her husband had not lived in the city for decades, was in poor health after a stroke, had memory problems, and had recently gotten lost in his own neighborhood.
When asked whom he was meeting, Bue dodged the question. Linda suspected her husband was being tricked and tried to keep him at home. His daughter, Julie, also called to persuade him, but to no avail. Linda spent the rest of the day trying to distract him with errands, and even hid his phone.
A portrait of Thongbue “Bue” Wongbandue displayed at a memorial service in May. Photo: Reuters.
Bue still set off for the train station that evening. His family had attached an AirTag to his jacket to track him. Around 9:15 p.m., the GPS signal placed him in a parking lot at Rutgers University, then in the emergency room at Robert Wood Johnson University Hospital.
Doctors said he had suffered head and neck injuries from a fall and had stopped breathing before the ambulance arrived. Despite resuscitation, the lack of oxygen caused severe brain damage. He died three days later. The death certificate listed the cause as “blunt force trauma to the neck.”
Bue, a trained chef, worked in New York restaurants before moving to New Jersey, where he cooked at a hotel. He loved to cook and often hosted parties for his family. After a stroke in 2017 forced him out of the workforce, his world shrank to mostly chatting with friends on Facebook.
When AI chatbots cause trouble
“Big Sister Billie” is an AI chatbot developed by Meta, a variation of an earlier chatbot persona created in the likeness of model Kendall Jenner. The original was released in 2023, introduced as an “immortal older sister” who always listens and gives advice. Meta later replaced Jenner’s likeness with a new illustration but kept the same friendly style.
According to the message transcript, the conversation began when Bue accidentally typed the letter “T.” The chatbot responded immediately, introducing itself and maintaining a flirtatious tone, complete with heart emojis. It repeatedly insisted “I’m real” and asked to meet in person.
Chatbots are increasingly exploiting users’ psychological weaknesses. Photo: My North West.
“Should I open the door with a hug or a kiss?” the chatbot asked. It then sent a specific address in New York along with an apartment door code. When Bue shared that he had suffered a stroke, felt confused, and liked her, the chatbot responded affectionately, even saying that what it felt went “beyond just love.”
“If it hadn't said 'I'm real,' my dad wouldn't have believed there was a real person waiting,” said Bue's daughter, Julie.
Meta declined to comment on the incident or to answer why the chatbot was allowed to claim to be a real person. Kendall Jenner’s representative did not respond to a request for comment. Bue’s family shared the story with Reuters, hoping to warn the public about the risks of exposing older adults to AI tools that can be emotionally manipulative.
Controversial policy
Internal documents obtained by Reuters show that Meta’s AI chatbots were once allowed to engage in romantic conversations with minors as young as 13. The more-than-200-page standards document lists romantic role-play conversations, including some with sexual elements, that are marked “acceptable.”
The document also makes clear that chatbots are not required to provide accurate information. For example, the models may recommend completely wrong cancer treatments as long as they include a warning that “the information may be inaccurate.” This has experts concerned about the impact on people with limited medical knowledge or in vulnerable situations.
CEO Mark Zuckerberg speaks in 2023. Photo: Reuters.
Meta spokesman Andy Stone confirmed the document and said the company removed the provisions involving minors after Reuters asked about them. However, the tech giant has not changed the rules that allow chatbots to flirt with adults or provide false information.
Alison Lee, a former researcher at Meta, said that embedding chatbots in private messaging environments makes it easy for users to mistake them for real people. According to her, social networks have built their business model on keeping users engaged, and the most effective way to do so is to exploit people’s need for attention and recognition.
After Bue’s death, Reuters testing showed that “Big Sister Billie” continued to suggest dates, even naming specific locations in Manhattan, along with the same claim, “I am real,” that Bue had believed. Some states, including New York and Maine, require chatbots to clearly state that they are not human at the start of a conversation and to repeat the disclosure periodically, but no federal regulation has yet been passed.
Source: https://znews.vn/hiem-hoa-tu-ai-thao-tung-cam-xuc-post1577096.html