
EchoLeak and the potential risks from artificial intelligence.
As artificial intelligence (AI) becomes an integral part of every task, from assisting with report writing and email responses to data analysis, users seem to be living in an era of unprecedented convenience. But the downside of this convenience is also beginning to emerge, particularly in security matters.
A recent security vulnerability, dubbed EchoLeak, has put users of the Microsoft Copilot service at risk of sensitive data leaks without requiring any action.
When AI becomes a security vulnerability
According to Tuoi Tre Online 's investigation, EchoLeak is a newly identified security vulnerability with the code CVE-2025-32711, which experts have rated as dangerous at 9.3/10 on the NIST scale.
What worries security experts is its "zero-click" nature: attackers can exploit data from Copilot without the user clicking, opening files, or even knowing that anything is happening.
This is no simple bug. The Aim Labs research team, which discovered the vulnerability, believes EchoLeak reflects a common design flaw in RAG (retrieval-augmented generation) based AI systems and agents. Because Copilot is part of the Microsoft 365 suite of applications, which holds emails, documents, spreadsheets, and meeting calendars for millions of users, the risk of data leakage is very serious.
The problem isn't just with the specific code snippet, but with how large model languages (LLMs) operate. AI needs a lot of context to provide accurate responses, and therefore, it's granted access to vast amounts of background data. Without clear control over the input-output flow, AI can be completely manipulated without the user's knowledge. This creates a new kind of "backdoor," not due to a vulnerability in the code, but because AI behaves beyond human comprehension.
Microsoft quickly released a patch, and no actual losses have been reported so far. But the lesson from EchoLeak is clear: When AI is deeply integrated into work systems, even a small misstep in how it understands context can lead to major security consequences.
The more convenient AI becomes, the more vulnerable personal data becomes.
The EchoLeak incident raises a worrying question: are users placing too much trust in AI without realizing they could be tracked or have their personal information exposed after just one message? The newly discovered vulnerability, which allows hackers to silently extract data without the user pressing any buttons, is something once only seen in science fiction, but has now become a reality.
While AI applications are becoming increasingly popular, from virtual assistants like Copilot and chatbots in banking and education , to AI platforms for content writing and email processing, most people are not warned about how their data is processed and stored.
"Chatting" with an AI system is no longer just about sending a few convenient questions; it can inadvertently reveal your location, habits, emotions, or even account information.
In Vietnam, many people are familiar with using AI on their phones and computers without basic knowledge of digital security . Many share private information with AI, believing that "it's just a machine." However, in reality, behind it is a system capable of recording, learning, and transmitting data elsewhere, especially when the AI platform comes from a third party and has not been thoroughly vetted for security.
To mitigate risks, users don't necessarily have to abandon technology, but they need to be more aware: they should carefully check if the AI applications they are using come from a trustworthy source, if the data is encrypted, and especially, they should not share sensitive information such as ID numbers, bank account details, health information, etc., with any AI system without clear warning.
Just like when the internet first emerged, AI also needs time to mature, and during that time, users should be the first to proactively protect themselves.
Have you ever shared too much information with AI?
When typing commands like "rewrite this report for me, but in a gentler way" or "summarize yesterday's meeting," many people don't realize that all the information they enter, including internal details, personal feelings, or work habits, can be recorded by AI. We're becoming accustomed to interacting with intelligent tools, forgetting the boundary between convenience and privacy.
Source: https://tuoitre.vn/lo-hong-bao-mat-trong-microsoft-copilot-canh-bao-moi-ve-nguy-co-ro-ri-du-lieu-tu-ai-20250620103045262.htm






Comment (0)