The GPT-5.4-Cyber model helps experts test for vulnerabilities in security systems. Photo: Bloomberg.
OpenAI has announced that it is beginning to deploy GPT-5.4-Cyber, a version of GPT-5.4 specially optimized for security work. The move comes exactly one week after Anthropic released Mythos Preview, a security-focused AI model whose availability was restricted over concerns that it could be dangerous in the wrong hands.
"We are optimizing our models to serve the security use case, starting today with a variant of GPT-5.4 trained with a cybersecurity focus," OpenAI said in an official announcement.
Access to GPT-5.4-Cyber has been granted to members of the Trusted Access for Cyber program, which launched in February with a $10 million API credit grant. Access initially went to hundreds of users and has since expanded to thousands of security professionals.
The model's standout capability is binary reverse engineering: analyzing compiled software to find malware and vulnerabilities without needing the original source code. OpenAI says that Codex Security, its earlier-launched security product, has helped patch over 3,000 critical security vulnerabilities since its deployment was expanded.
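To make the idea concrete: binary reverse engineering starts from a compiled file's raw bytes rather than its source code. The minimal sketch below (not OpenAI's method, just an illustration) parses the header of an ELF binary, the standard executable format on Linux, to recover basic facts about a program with no source available.

```python
import struct

def parse_elf_header(data: bytes) -> dict:
    """Extract basic facts from the first 20 bytes of an ELF header."""
    if data[:4] != b"\x7fELF":
        raise ValueError("not an ELF binary")
    # Byte 4 (EI_CLASS) distinguishes 32-bit from 64-bit binaries.
    ei_class = {1: "32-bit", 2: "64-bit"}.get(data[4], "unknown")
    # e_type (offset 16) and e_machine (offset 18) are little-endian u16s.
    e_type, e_machine = struct.unpack_from("<HH", data, 16)
    machines = {0x3E: "x86-64", 0xB7: "AArch64"}
    return {
        "class": ei_class,
        "type": {2: "executable", 3: "shared object"}.get(e_type, e_type),
        "machine": machines.get(e_machine, hex(e_machine)),
    }

# A synthetic 64-bit x86-64 shared-object header, for demonstration only:
header = b"\x7fELF\x02\x01\x01" + b"\x00" * 9 + struct.pack("<HH", 3, 0x3E)
print(parse_elf_header(header))
# {'class': '64-bit', 'type': 'shared object', 'machine': 'x86-64'}
```

Real reverse-engineering tooling goes much further, disassembling machine code and reconstructing control flow, but every such pipeline begins with structural parsing like this.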
OpenAI's strategy differs significantly from Anthropic's. While its competitor only grants access to Mythos to around 40 carefully selected organizations, OpenAI opts for a gradual expansion approach based on identity verification and oversight.
"This is a team sport , and we need to ensure every team is empowered to protect its systems. No one should be in a position to choose who wins or loses in cybersecurity," said Fouad Matin, a security researcher at OpenAI.
On the technical side, OpenAI believes its current safeguards are sufficient to mitigate the risks of deploying conventional models widely. Even so, the new model has been further refined to provide a higher level of system security.
The race between the two companies reflects a larger debate in the security industry about how to handle AI capable of attacking systems. One side argues that tight restrictions, like those Anthropic imposed on Mythos, would give defenders an advantage over attackers. The other side, particularly proponents of open-source software, argues that the world would be safer if all defenders had access to the technology, rather than just a select group chosen by large corporations.
Source: https://znews.vn/openai-dap-tra-anthropic-post1643820.html