OpenAI announced Thursday it is rolling out a more permissive version of GPT-5.5, dubbed "Spud," to vetted cyber defenders. The move opens a limited preview of GPT-5.5-Cyber to those responsible for securing critical infrastructure.

The decision follows recent security testing showing the model is nearly as adept at finding and exploiting software bugs as Anthropic's Mythos Preview. That capability has ignited urgent debate in Silicon Valley and the White House over how to prevent misuse by malicious actors.

A source familiar with GPT-5.5-Cyber's abilities told Axios the model performs roughly on par with Mythos, though one major test put the competitor narrowly ahead. Defenders approved for OpenAI's highest Trusted Access for Cyber tier will receive a version with fewer guardrails than the public model.

These users can deploy the model to hunt for bugs, study malware, and reverse engineer attacks. OpenAI still blocks certain malicious tasks, such as credential theft and writing malware, but the enhanced access represents a significant expansion of defensive cyber capabilities.

The implications are twofold: critical infrastructure gains a powerful ally against threats, but each advancement also signals to potential adversaries what these models can do. The White House debate underscores the delicate balance between empowering defenders and limiting exposure to offensive uses.

While the model strengthens cyber defense, critics argue that even restricted releases carry risks of leakage or misuse. The approach assumes trusted actors can be reliably vetted, an assumption some security experts question given the sophistication of modern threats.