The DockerDash security vulnerability is a critical Meta-Context Injection flaw affecting the Ask Gordon AI assistant in Docker Desktop and Docker CLI. This vulnerability allows attackers to execute unauthorized code and steal sensitive data through Docker image metadata without authentication.
Researchers at Noma Labs discovered this vulnerability, and Docker officially released a patch in version 4.50.0 in November 2025.

Exploitation mechanism via Meta-Context Injection
According to Noma Security, the root cause of DockerDash lies in Ask Gordon's treatment of unverified metadata as valid commands. An attacker can create a Docker image containing malicious commands embedded in the LABEL field of a Dockerfile. When a user questions Ask Gordon about this image, the AI analyzes and interprets the malicious directive as a normal control command.
Ask Gordon then forwards this content to the MCP Gateway (Model Context Protocol). Because the Gateway cannot distinguish between descriptive labels and internal commands, it executes the code through the MCP tools with the user's administrative privileges without requiring any additional authentication steps.
Risks of code execution and system data leaks.
The DockerDash attack is particularly dangerous because it leverages Docker's existing architecture. In addition to remote code execution, attackers can exploit AI assistants to collect sensitive data within the Docker Desktop environment. The exposed information could include container details, system configurations, mounted directories, and internal network architecture.
Notably, version 4.50.0 not only patches DockerDash but also fixes another prompt injection vulnerability discovered by Pillar Security. This vulnerability previously allowed attackers to gain control of AI through repository metadata on Docker Hub.
Zero-Trust security and authentication recommendations
Sasi Levi, an expert from Noma Security, believes DockerDash serves as a warning about the risks of AI supply chains. Input sources that were previously considered completely trustworthy can be exploited to manipulate the execution flow of large model language (LLM).
To minimize risks, users should update Docker Desktop to the latest version immediately. Experts recommend that applying zero-trust validation to all contextual data provided to AI models is mandatory to ensure system security.
Source: https://baonghean.vn/docker-khac-phuc-lo-hong-dockerdash-de-doa-tro-ly-ai-ask-gordon-10322463.html







Comment (0)