Google is facing criticism after refusing to fix a serious vulnerability in Gemini, an artificial intelligence (AI) tool deeply integrated into the Google Workspace ecosystem. According to security experts, the bug could lead to sensitive user information being leaked to strangers through an attack called “ASCII smuggling”.

Gemini vulnerable to code hidden in text
Security researcher Viktor Markopoulos tested several popular large language models (LLMs) for their resistance to this attack. The results showed that Gemini, DeepSeek, and Grok were all vulnerable, while ChatGPT, Claude, and Copilot were better protected.
ASCII smuggling is a form of attack where an attacker hides a prompt or command code in text, often in extremely small size or specially encoded. When a user asks an AI to summarize or analyze the content containing this hidden code, the AI tool will accidentally read and execute the malicious prompt without the user knowing.

Gemini 3 attracts attention before release
Risk of sensitive data leak in Google Workspace
With Gemini being directly integrated into Gmail, Docs, and Google Workspace services, this risk becomes particularly worrying: A hidden prompt could cause the AI to automatically search for personal information, contact data, or confidential documents, then send them out of the system without the user knowing.
Markopoulos contacted Google to present his findings, even illustrating with an invisible piece of code that prompted Gemini to share a link to a malicious website just to “redeem a discounted phone.”
Google denies this is a security flaw
In a published response, Google said that this was not a security vulnerability, but a form of social engineering attack. In other words, the company considered this a bug caused by careless user interaction, not a problem with its AI system.
This response suggests that Google has no plans to release a patch for Gemini in the near future, which means users should be more cautious when using AI to process documents, especially those containing sensitive information or coming from unverified sources.
Growing concerns about the safety of deeply integrated AI
The incident raises questions about the responsibility of tech companies as AI becomes increasingly integrated into work and communication platforms. While Google has emphasized its commitment to “user safety,” the decision could affect confidence in Gemini’s security – especially in the corporate environment.
According to Android Authority
Source: https://baovanhoa.vn/nhip-song-so/google-gay-tranh-cai-khi-tu-choi-va-loi-bao-mat-gemini-173381.html
Comment (0)