OpenAI Unveils GPT‑5.4‑Cyber in Pointed Rejoinder to Anthropic

OpenAI on Tuesday unveiled its answer to artificial intelligence rival Anthropic's much-touted private release of a cybersecurity model, announcing the broader availability of GPT‑5.4‑Cyber.
In a pointed announcement, OpenAI said its intent is to make tools that can identify vulnerabilities “as widely available as possible.”
Internal safeguards, know-your-customer verification and "trust signals" will protect against misuse, the company asserted. "We don't think it's practical or appropriate to centrally decide who gets to defend themselves. Instead, we aim to enable as many legitimate defenders as possible, with access grounded in verification, trust signals, and accountability," it wrote.
The company's approach is a study in contrasts with Anthropic's. Earlier this month, Anthropic announced it had created a private consortium of hand-picked companies to receive access to its Mythos model, a variation of its Claude large language model that Anthropic said has already found thousands of high-severity vulnerabilities, "including some in every major operating system and web browser" (see: Anthropic Calls Its New Model Too Dangerous to Release).
OpenAI said it, too, is worried about the potential for misuse, and will initially allow access to the new model – a chatbot "purposely fine-tuned for additional cyber capabilities" – only to vetted security vendors, organizations and researchers. But access to the model won't be restricted to a pre-selected coalition, the company said. Interested parties can apply to join its "Trusted Access for Cyber" program, an effort it announced in February.
The company's three principles – democratized access, iterative deployment and ecosystem resilience – present the model as a logical next step rather than a radical jump in tool capability. The framing is distinct from Anthropic's, which invoked elevated risks to "economies, public safety and national security" when it announced Project Glasswing.
“We believe the class of safeguards in use today sufficiently reduce cyber risk enough to support broad deployment of current models,” it said. “We design mechanisms which avoid arbitrarily deciding who gets access for legitimate use and who doesn’t.”
GPT‑5.4‑Cyber is the result of months of work, the company also said, emphasizing the iterative nature of the model's development and deployment: "As we better understand both their capabilities and risks, we update our models and safety systems accordingly."
