Vietnam.vn - Nền tảng quảng bá Việt Nam

Microsoft Copilot Vulnerability: New Warning on AI Data Leak Risk

A serious security flaw in Microsoft Copilot allows users' data to be stolen without any action. The incident raises a big question: Is artificial intelligence helping us, or putting us at risk?

Báo Tuổi TrẻBáo Tuổi Trẻ20/06/2025

Lỗ hổng bảo mật - Ảnh 1.

EchoLeak and the Potential Risks of Artificial Intelligence

As artificial intelligence (AI) becomes a part of everything from writing reports, responding to emails to analyzing data, it seems like we are living in an era of unprecedented convenience. But the downside of convenience is also starting to become apparent, especially when it comes to security.

A recent security vulnerability dubbed EchoLeak has left users of the Microsoft Copilot service at risk of having their sensitive data leaked without them taking any action.

When AI becomes a security vulnerability

According to Tuoi Tre Online 's research, EchoLeak is a newly recorded security vulnerability with the code CVE-2025-32711, which experts have rated as dangerous at 9.3/10 according to NIST's scale.

What worries security experts is that it is "zero-click" in nature: attackers can exploit data from Copilot without the user having to click, open a file, or even know anything is going on.

This is no simple bug. The research team at Aim Labs, which discovered the flaw, believes EchoLeak reflects a common design flaw in agent- and RAG-based AI systems. Because Copilot is part of the Microsoft 365 suite of apps that holds the emails, documents, spreadsheets, and meeting schedules of millions of users, the potential for data leakage is particularly serious.

The problem lies not just in the specific code, but in the way large language models (LLMs) operate. AIs need a lot of context to respond accurately, and so they are given access to a lot of background data. Without clear controls on the input and output, AIs can be “driven” in ways that users are not aware of. This creates a new kind of “backdoor” that is not due to a flaw in the code, but because AIs behave outside of human understanding.

Microsoft quickly released a patch, and no real-world damage has been reported so far. But the lesson from EchoLeak is clear: When AI is deeply integrated into working systems, even small errors in how it understands context can have major security consequences.

The more convenient AI becomes, the more fragile personal data becomes

The EchoLeak incident raises a troubling question: Are people so trusting of AI that they don’t realize that they can be tracked or have their personal information exposed with just a text message? A newly discovered vulnerability that allows hackers to silently extract data without users having to press any buttons is something that was once only seen in science fiction movies, but is now a reality.

While AI applications are increasingly popular, from virtual assistants like Copilot, chatbots in banking,education , to AI platforms that write content and process emails, most people are not warned about how their data is processed and stored.

“Chatting” with an AI system is no longer just about asking a few questions for convenience, but can also inadvertently reveal your location, habits, emotions, or even account information.

In Vietnam, many people are familiar with using AI on phones and computers without basic knowledge of digital security . Many people share private information with AI because they believe that "it's just a machine". But in reality, behind it is a system that can record, learn and transmit data to other places, especially when the AI ​​platform comes from a third party and has not been clearly tested for security.

To limit risks, users do not necessarily have to give up technology, but need to be more aware: they should carefully check whether the AI ​​application they are using has a reliable source, whether the data is encrypted, and especially do not share sensitive information such as ID numbers, bank accounts, health information... with any AI system without being clearly warned.

Just like when the internet was first born, AI also needs time to perfect and during that time, users should be the first to proactively protect themselves.

Are you ever sharing too much with AI?

When typing in a command like “rewrite this report for me in a more friendly way” or “summarize yesterday’s meeting”, many people don’t think that all the information they enter, including internal details, personal feelings, and work habits, can be recorded by AI. We have become so accustomed to chatting with smart tools that we forget the line between convenience and privacy.

Back to topic
TUAN VI

Source: https://tuoitre.vn/lo-hong-bao-mat-trong-microsoft-copilot-canh-bao-moi-ve-nguy-co-ro-ri-du-lieu-tu-ai-20250620103045262.htm


Comment (0)

No data
No data

Same tag

Same category

Braised Pig's Feet with Fake Dog Meat - A Special Dish of Northern People
Peaceful mornings on the S-shaped strip of land
Fireworks explode, tourism accelerates, Da Nang scores in summer 2025
Experience night squid fishing and starfish watching in Phu Quoc pearl island

Same author

Heritage

Figure

Enterprise

No videos available

News

Political System

Destination

Product