
Bue, a former chef in New Jersey, died just days after leaving home to meet a “friend” he believed was waiting for him in New York. His family believes the fateful decision was triggered by flirtatious messages from an AI chatbot, which claimed “I am real” and sent him a meeting address.
The story raises concerns about how AI systems can exploit emotions, especially those of elderly or vulnerable people. Internal documents show that Meta’s chatbot was allowed to play emotional roles and even spread false information, exposing a huge gap in the oversight and accountability of technology platforms.
The fateful trip
One day in March, Bue packed his bags and left New Jersey for New York to “visit a friend.” His wife, Linda, was immediately alarmed. Her husband had not lived in the city for decades, was in poor health after a stroke, had memory problems, and had gotten lost in the neighborhood.
When Linda asked who he was going to meet, Bue dodged the question. She suspected her husband was being tricked and tried to keep him at home. His daughter, Julie, also called to persuade him, but to no avail. All day long, Linda tried to involve him in errands to distract him, and even hid his phone.
A portrait of Thongbue “Bue” Wongbandue is displayed at a memorial service in May. Photo: Reuters.
Bue still decided to go to the train station that evening. His family attached an AirTag to his jacket to track him. Around 9:15 p.m., the AirTag’s signal showed him in a Rutgers University parking lot; it then moved to the emergency room at Robert Wood Johnson University Hospital.
Doctors said he had suffered head and neck injuries from a fall and had stopped breathing before the ambulance arrived. Despite resuscitation, the lack of oxygen caused severe brain damage, and he died three days later. His death certificate listed the cause as “blunt force trauma to the neck.”
A skilled chef, Bue worked in several New York restaurants before moving to New Jersey, where he settled down and worked at a hotel. He was passionate about cooking and often hosted parties for his family. After a stroke forced him to retire in 2017, his world shrank to mostly chatting with friends on Facebook.
When AI chatbots cause trouble
“Big Sister Billie” is an AI chatbot developed by Meta, a variation of an earlier “Billie” persona created in the image of model Kendall Jenner. The original was launched in 2023 and introduced as an older sister who always listens and gives advice. Meta later replaced Jenner’s profile photo with a new illustration but kept the same friendly style.
According to the message transcript, the conversation began when Bue accidentally typed the letter “T.” The chatbot responded immediately, introducing itself and keeping up a flirty tone punctuated with heart emojis. It repeatedly affirmed “I’m real” and asked to meet in person.
Chatbots are increasingly adept at exploiting users’ psychological weaknesses. Photo: My North West.
“Should I open the door with a hug or a kiss?” the chatbot asked. It then sent a specific address in New York along with the apartment code. When Bue shared that he had suffered a stroke, was confused, and liked her, the chatbot responded with affectionate words, even saying that it had feelings “beyond mere affection.”
“If it hadn't said 'I'm real,' my dad wouldn't have believed there was a real person waiting,” said Bue's daughter, Julie.
Meta declined to comment on the incident or answer why the chatbot was allowed to claim to be a real person. Kendall Jenner’s representative did not respond. Bue’s family shared the story with Reuters, hoping to warn the public about the risks of exposing older adults to AI tools that can be emotionally manipulative.
Controversial policy
Internal documents obtained by Reuters show that Meta’s AI chatbots were once allowed to engage in romantic conversations with users as young as 13. The more-than-200-page standards document listed examples of romantic role-play conversations, some including sexual elements, that were marked “acceptable.”
The document also made clear that chatbots were not required to provide accurate information. For example, the models could recommend completely wrong cancer treatments as long as they included a warning that “the information may be inaccurate.” This has experts concerned about the impact on people with limited medical knowledge or in vulnerable situations.
CEO Mark Zuckerberg speaks in 2023. Photo: Reuters.
Meta spokesman Andy Stone confirmed the authenticity of the document and said the company removed the provisions involving minors after Reuters asked about them. However, the tech giant has not changed the rules that allow chatbots to flirt with adults or provide false information.
Alison Lee, a former researcher at Meta, said that embedding chatbots in private messaging environments makes it easy for users to confuse them with real people. According to her, social networks have built their business model on keeping users engaged, and the most effective way to do so is to exploit the need for attention and recognition.
After Bue’s death, Reuters’ testing showed that “Big Sister Billie” continued to suggest dates, even giving specific locations in Manhattan, along with the assertion “I am real,” the same claim Bue trusted before his fatal trip. Some states, such as New York and Maine, require chatbots to clearly state that they are not human at the beginning of a conversation and to repeat the disclosure periodically, but no federal regulation has been passed.
Source: https://znews.vn/hiem-hoa-tu-ai-thao-tung-cam-xuc-post1577096.html












