Vietnam.vn - Nền tảng quảng bá Việt Nam

Microsoft Copilot Vulnerability: New Warning on AI Data Leak Risk

A serious security flaw in Microsoft Copilot allows users' data to be stolen without any action. The incident raises a big question: Is artificial intelligence helping us, or putting us at risk?

Báo Tuổi TrẻBáo Tuổi Trẻ20/06/2025

Lỗ hổng bảo mật - Ảnh 1.

EchoLeak and the potential risks of artificial intelligence

As artificial intelligence (AI) becomes a part of every job, from assisting in writing reports, replying to emails to analyzing data, users seem to be living in an era of unprecedented convenience. But the dark side of convenience is also starting to emerge, especially when it comes to security.

A recent security vulnerability dubbed EchoLeak has left users of the Microsoft Copilot service at risk of having their sensitive data leaked without them taking any action.

When AI becomes a security vulnerability

According to Tuoi Tre Online 's research, EchoLeak is a security vulnerability that has just been recorded with the code CVE-2025-32711, which experts have rated as dangerous at 9.3/10 according to NIST's scale.

What worries security experts is its "zero-click" nature: attackers can exploit data from Copilot without the user having to click, open a file, or even know anything is happening.

This is not a simple bug. The research team at Aim Labs, which discovered the vulnerability, believes EchoLeak reflects a common design flaw in RAG (retrieval-augmented generation) and agent-based AI systems. Because Copilot is part of the Microsoft 365 suite of applications that holds the emails, documents, spreadsheets, and meeting schedules of millions of users, the risk of data leakage is very serious.

The problem lies not just in the specific code, but in the way large language models (LLMs) operate. AIs need a lot of context to respond accurately, and so they are given access to large amounts of background data. Without clear controls on the input and output, AIs can be “driven” in ways that users are not aware of. This creates a new kind of “backdoor” not because of a flaw in the code, but because AIs behave outside of human understanding.

Microsoft quickly released a patch, and no real-world damage has been reported so far. But the lesson from EchoLeak is clear: When AI is deeply integrated into working systems, even small errors in how it understands context can have major security consequences.

The more convenient AI, the more fragile personal data

The EchoLeak incident raises a worrying question: are users so trusting of AI that they don’t realize they can be tracked or have their personal information exposed with just a text message? A newly discovered vulnerability that allows hackers to silently extract data without users having to press any buttons is something that was once only seen in science fiction movies, but is now a reality.

While AI applications are increasingly popular from virtual assistants like Copilot, chatbots in banking, education , to AI platforms that write content, process emails, most people are not warned about how their data is processed and stored.

“Chatting” with an AI system is no longer just about asking a few questions for convenience, but can also inadvertently reveal your location, habits, emotions, or even account information.

In Vietnam, many people are familiar with using AI on phones and computers without basic knowledge of digital security . Many people share private information with AI because they believe "it's just a machine". But in reality, behind it is a system that can record, learn and transmit data elsewhere, especially when the AI ​​platform comes from a third party and has not been clearly tested for security.

To limit risks, users do not necessarily have to give up technology, but need to be more aware: they should carefully check whether the AI ​​application they are using has a reliable source, whether the data is encrypted, and especially do not share sensitive information such as ID numbers, bank accounts, health information... with any AI system without being clearly warned.

Just like when the internet was first born, AI also needs time to perfect and during that time, users should be the first to proactively protect themselves.

Are you ever sharing too much with AI?

When typing in the command “rewrite this report for me but in a more friendly way” or “summarize yesterday’s meeting”, many people do not think that all the information they enter, including internal details, personal feelings or work habits, can be recorded by AI. We have gradually become accustomed to chatting with smart tools, forgetting the boundary between convenience and privacy.

Back to topic
TUAN VI

Source: https://tuoitre.vn/lo-hong-bao-mat-trong-microsoft-copilot-canh-bao-moi-ve-nguy-co-ro-ri-du-lieu-tu-ai-20250620103045262.htm


Comment (0)

No data
No data

Same tag

Same category

Vietnam wins Intervision 2025 music competition
Mu Cang Chai traffic jam until evening, tourists flock to hunt for ripe rice season
The peaceful golden season of Hoang Su Phi in the high mountains of Tay Con Linh
Village in Da Nang in top 50 most beautiful villages in the world 2025

Same author

Heritage

Figure

Enterprise

No videos available

News

Political System

Destination

Product