On Tuesday, OpenAI unveiled a new strategy for expanding access to AI models with advanced cybersecurity capabilities. The move coincides with the release of GPT-5.4-Cyber, a variant designed specifically for defensive security tasks. The company plans to make these tools more widely available to vetted users through its Trusted Access for Cyber program.
This represents a significant shift in how the firm approaches AI security risks. Rather than restricting what its models can do, the company is now focused on verifying who gets access to the most sensitive capabilities. The approach aims to balance widespread availability with safeguards against potential misuse.
The program will expand access to thousands of individuals and hundreds of security teams, according to OpenAI's announcement. All participants must complete identity verification checks and will be subject to monitoring systems. This contrasts with Anthropic's more restrictive rollout of its Mythos Preview model, which is reportedly available to only about 40 organizations.
OpenAI's strategy reflects growing industry debate about how to deploy powerful AI tools safely. The company says it wants to make defensive cybersecurity capabilities "as widely available as possible while preventing misuse." This could accelerate adoption of AI-assisted security measures across more organizations.
The expansion comes as cyber threats continue to grow globally. Making advanced defensive tools more accessible could help smaller security teams keep pace with well-resourced threat actors. Success, however, depends heavily on the effectiveness of the program's verification and monitoring systems.