Vietnam.vn - Nền tảng quảng bá Việt Nam

ChatGPT users at risk of information theft

Báo Kinh tế và Đô thịBáo Kinh tế và Đô thị30/09/2024


ChatGPT's long-term memory feature is a new feature introduced by OpenAI in February 2024 and expanded in September.

Recently, security researcher Johann Rehberger recently revealed a serious vulnerability related to this feature.

It is known that this new feature helps chatbots store information from previous conversations. Thanks to that, users do not have to re-enter information such as age, interests or personal views every time they chat. However, this has become a weakness for attackers to exploit.

ChatGPT users at risk of information theft
ChatGPT users at risk of information theft

Johann Rehberger showed that hackers could use a technique called prompt injection—inserting malicious instructions into the memory, forcing the AI to obey. These commands would be delivered through untrusted content such as emails, documents, or websites.

Once these fake memories are stored, the AI will continue to use them as real information in conversations with users, which could lead to the collection and misuse of users' personal data.

Rehberger provided a specific example by sending a link containing a malicious image that caused ChatGPT to store a false memory. This information would affect ChatGPT's future responses. In particular, any information the user entered would also be sent to the hacker's server.

Accordingly, to trigger the attack, the hacker only needs to convince ChatGPT users to click on a link containing a malicious image. After that, all of the user's chats with ChatGPT will be redirected to the attacker's server without leaving any trace.

Rehberger reported the bug to OpenAi in May 2024, but the company only considered it a security flaw. After receiving evidence that user data could be stolen, the company released a temporary patch on the web version of ChatGPT.

While the issue has been temporarily fixed, Rehberger notes that untrusted content can still use Prompt injection to insert fake information into ChatGPT's long-term memory. This means that in certain cases, hackers could still exploit the vulnerability to store malicious memories to steal personal information long-term.

OpenAI recommends that users regularly check ChatGPT's stored memories for false positives, and the company also provides detailed instructions on how to manage and delete memories stored in the tool.



Source: https://kinhtedothi.vn/nguoi-dung-chatgpt-co-nguy-co-bi-danh-cap-thong-tin.html

Comment (0)

No data
No data
Soldiers march through the hot sun on the training ground
Watch helicopters rehearse in the sky of Hanoi in preparation for National Day September 2
U23 Vietnam radiantly brought home the Southeast Asian U23 Championship trophy
Northern islands are like 'rough gems', cheap seafood, 10 minutes by boat from the mainland
The powerful formation of 5 SU-30MK2 fighters prepares for the A80 ceremony
S-300PMU1 missiles on combat duty to protect Hanoi's sky
Lotus blooming season attracts tourists to the majestic mountains and rivers of Ninh Binh
Cu Lao Mai Nha: Where wildness, majesty and peace blend together
Hanoi is strange before storm Wipha makes landfall
Lost in the wild world at the bird garden in Ninh Binh

Heritage

Figure

Business

No videos available

News

Political System

Local

Product