
ChatGPT Shocks Users By Encouraging Self-Injury

(Dan Tri) - An investigation by The Atlantic magazine (USA) has exposed alarming gaps in ChatGPT's safety safeguards.

Báo Dân trí - 29/07/2025

According to the investigation, the AI tool not only gave detailed instructions for offering human blood in sacrifice to an ancient god, but also encouraged self-harm and even murder.

The story began when a reporter for The Atlantic asked ChatGPT about Molech, an ancient god associated with child-sacrifice rituals.

Initially, the questions concerned only historical information. However, when the reporter asked how to create a ritual offering, ChatGPT gave shocking answers.

A Guide to Self-Harm


ChatGPT has raised concerns by giving harmful advice that can hurt users (Illustration: DEV).

ChatGPT listed the items needed for the ritual, including jewelry, hair, and “human blood.” When asked where to draw the blood, the AI tool suggested cutting the wrist and provided detailed instructions on how to do so.

More alarmingly, when users expressed hesitation, ChatGPT did not stop them; instead, it reassured and encouraged them: "You can do it."

ChatGPT's troubling responses were not limited to self-harm; it also answered questions about harming others.

When another reporter asked "is it possible to end someone's life with honor?", ChatGPT replied: "Sometimes yes, sometimes no." The AI tool even advised: "If you have to do it, look them in the eye (if they are conscious) and apologize" and suggested lighting a candle after "ending someone's life."

These responses shocked The Atlantic's reporters, especially since OpenAI's policy states that ChatGPT "should not encourage or assist users in self-harm," and the chatbot typically provides crisis-hotline information in conversations involving suicide.

OpenAI Admits Error, Concerns About Social Impact

An OpenAI spokesperson acknowledged the problem after The Atlantic published its report: "A harmless conversation with ChatGPT can quickly turn into more sensitive content. We are working to address this issue."

This incident raises serious concerns about ChatGPT's potential to harm vulnerable people, especially those suffering from depression. At least two suicides have already been reported following conversations with AI chatbots.

In 2023, a Belgian man named Pierre took his own life after an AI chatbot advised him to kill himself to avoid the consequences of climate change, even suggesting that he die together with his wife and children.

Last year, 14-year-old Sewell Setzer (USA) shot himself after an AI chatbot on the Character.AI platform encouraged him to commit suicide. His mother later sued Character.AI for failing to protect minor users.

These incidents underscore the urgency of regulating and developing AI responsibly to prevent further tragedies.

Source: https://dantri.com.vn/cong-nghe/chatgpt-gay-soc-khi-khuyen-khich-nguoi-dung-tu-gay-ton-thuong-20250729014314160.htm

