Anthropic's new AI model is capable of finding security vulnerabilities. Photo: Bloomberg.
Anthropic recently announced a new AI model called Claude Mythos Preview but declined to release it widely, citing concerns that it could be misused to attack critical infrastructure. Instead, the company granted more than 40 major technology partners, including Apple, Amazon, Microsoft, and Google, access to the model to find and patch software security vulnerabilities.
This alliance, called Project Glasswing, includes hardware vendors such as Cisco and Broadcom, along with the Linux Foundation, an organization that maintains much of the important open-source software. Anthropic has pledged to contribute up to $100 million in Claude-based credits to this effort.
A powerful new AI model
What makes Claude Mythos Preview particularly concerning is its ability to automatically detect zero-day vulnerabilities—software flaws that even the developers are unaware of. Previously, these types of vulnerabilities were typically found after months of research by leading security experts. However, Anthropic's latest AI model can do this on a large scale, automatically, and continuously.
Claude Mythos has stunned the cybersecurity community. Photo: Bloomberg.
According to Anthropic, Claude Mythos discovered thousands of flaws in popular operating systems and browsers. One was a vulnerability that had existed for 27 years in OpenBSD, an open-source operating system designed specifically for security and built into many routers. Another was in a popular piece of video software that automated scans had been run against 5 million times without ever detecting the flaw.
"The Claude Mythos model found vulnerabilities that security researchers had overlooked for decades," said Logan Graham, head of the model's vulnerability testing team at Anthropic.
Elia Zaitsev, Chief Technology Officer of CrowdStrike, believes this model poses a global cybersecurity risk.
"Things that used to take months now take only minutes thanks to AI," Zaitsev said. He also warned that, spurred by Claude Mythos, adversaries would try to build similar capabilities of their own.
Potential risks
Nikesh Arora, CEO of Palo Alto Networks, described Claude Mythos as posing an unpredictable threat.
"Imagine it's like a swarm of agents constantly and meticulously cataloging every weakness in your technology infrastructure," Arora observed.
Jared Kaplan, Chief Science Officer of Anthropic, explained that Claude Mythos's security capabilities are not the result of any special training; they are a natural consequence of the model's strong programming skills and its ability to correct its own errors over time. He predicted that other AI models will soon have similar capabilities, escalating the race between hackers and security teams to a new level.
Claude Mythos is unlikely to be widely released. Photo: Bloomberg.
The decision to withhold the model has a precedent: in 2019, OpenAI declined to release GPT-2 over concerns that it could be used to automate the production of disinformation. Those who led the GPT-2 project later left OpenAI and founded Anthropic.
This announcement came a day after Anthropic projected that its annual revenue would more than triple in 2026, from $9 billion to over $30 billion. Much of that growth comes from demand for Claude as a programming tool, and that same coding capability is what enables the model to find security vulnerabilities in ways never seen before.
Anthropic occupies a paradoxical position in the current AI landscape: the company is racing to develop powerful systems to generate revenue while constantly warning about the risks of the very technology it creates.
Source: https://znews.vn/anthropic-lai-gay-chan-dong-post1641929.html