
This vulnerability allows attackers to inject malicious commands directly into the AI's memory, turning a useful feature into a persistent weapon for executing arbitrary code (Illustrative image: ST).
According to a report from LayerX Security, this attack exploited a Cross-Site Request Forgery (CSRF) vulnerability to inject malicious commands into ChatGPT's persistent memory.
The "Memory" feature, originally designed for AI to remember useful details like user names and preferences for personalized responses, can now be "broken."
Once memory is infected with malware, these commands will persist permanently—unless the user manually deletes them through the settings—and can be triggered across multiple devices and sessions.
"What makes this vulnerability particularly dangerous is that it targets the AI's persistent memory, not just the browser session," said Michelle Levy, Director of Security Research at LayerX Security.
Levy explained: "Simply put, the attacker uses a trick to 'trick' the AI, forcing it to write a malicious command to its memory. The most dangerous part is that this command will remain permanently in the AI – even if the user switches computers, logs out and logs back in, or even uses a different browser."
Later, when a user makes a perfectly normal request, they may inadvertently activate the malware. As a result, hackers can run code stealthily, steal data, or gain higher control over the system.”
The attack scenario described is quite simple: First, the user logs into ChatGPT Atlas. They are tricked into clicking on a malicious link, then the malicious website secretly triggers a CSRF request, silently inserting malicious instructions into the victim's ChatGPT memory.
Finally, when a user makes a perfectly legitimate query, for example asking the AI to write code, the infected "memories" will be triggered.
LayerX points out that the problem is exacerbated by ChatGPT Atlas's lack of robust anti-phishing controls.
In tests involving over 100 vulnerabilities and phishing sites, Atlas only managed to block 5.8% of malicious websites.
This figure is far too modest compared to Google Chrome (47%) or Microsoft Edge (53%), making Atlas users "up to 90% more vulnerable" to attacks compared to traditional browsers.
This discovery follows another rapid malware injection vulnerability previously demonstrated by NeuralTrust, showing that AI browsers are becoming a new attack front.
OpenAI launched the ChatGPT Atlas web browser earlier last week. Unsurprisingly, OpenAI integrated its ChatGPT artificial intelligence engine into this browser, providing better support for users while browsing the web.
Whenever a user clicks on a search result in ChatGPT Atlas, a ChatGPT dialog box will appear right next to the webpage window, allowing them to ask questions related to the content being viewed, saving reading time.
ChatGPT can also summarize website content, edit text when composing emails, or suggest ways to rewrite it to better suit the context.
Source: https://dantri.com.vn/cong-nghe/nguoi-dung-chatgpt-atlas-co-the-bi-danh-cap-du-lieu-voi-ma-doc-vinh-vien-20251028111706750.htm






Comment (0)