ChatGPT's long-term memory feature is a new feature introduced by OpenAI in February 2024 and expanded in September.
Recently, security researcher Johann Rehberger revealed a serious vulnerability related to this feature.
Reportedly, this new feature allows chatbots to store information from previous conversations. This eliminates the need for users to re-enter information such as age, interests, or personal opinions each time they chat. However, this very feature has become a vulnerability that attackers can exploit.

Johann Rehberger pointed out that hackers can use prompt injection techniques—implanting malicious instructions into AI memory, forcing it to follow orders. These commands would be delivered through untrusted content such as emails, documents, or websites.
Once these false memories are stored, the AI will continue to use them as real information in conversations with the user. This can lead to the unauthorized collection and use of users' personal data.
Rehberger provided a specific example by sending a link containing a malicious image that caused ChatGPT to store a false memory. This information would affect ChatGPT's subsequent responses. Importantly, all user input would also be sent to the hacker's server.
Accordingly, to activate the attack, the hacker only needs to convince ChatGPT users to click on a link containing malicious images. Then, all of the user's conversations with ChatGPT will be redirected to the attacker's server without leaving any trace.
In May 2024, after discovering the vulnerability, Rehberger reported it to OpenAi, but the company dismissed it as a security flaw. After receiving evidence that user data could have been stolen, the company released a temporary patch on the web version of ChatGPT.
Although the issue has been temporarily resolved, Rehberger emphasized that untrusted content can still use prompt injection to insert false information into ChatGPT's long-term memory. This means that in certain cases, hackers can still exploit the vulnerability to store malicious memories for the purpose of stealing personal information permanently.
OpenAI recommends that users regularly check ChatGPT's stored data to detect any misinformation. The company also provides detailed instructions on how to manage and delete stored data within the tool.
Source: https://kinhtedothi.vn/nguoi-dung-chatgpt-co-nguy-co-bi-danh-cap-thong-tin.html






Comment (0)