Mozilla, the nonprofit behind the Firefox browser, reported that its early access to Anthropic's unreleased Claude Mythos Preview model led to the discovery of hundreds of security flaws. The company fixed 423 bugs in April, a dramatic increase from 25 in January. One vulnerability had persisted for 20 years, undetected by conventional security-testing tools known as fuzzers.
This breakthrough marks a significant leap in AI's ability to identify software weaknesses. Mozilla previously found that earlier AI systems produced unreliable results, which it termed "slop." The new model's success suggests a maturation of large language models for cybersecurity applications, where precision is critical.
Mozilla publicly detailed 12 of the bugs uncovered through the AI, and attributed 271 of the issues directly to the AI-assisted effort. The company shipped all 423 fixes in Firefox's April releases. Prior to using Mythos, Mozilla had struggled to achieve such comprehensive coverage with standard fuzzing tools.
The development signals a shift in how organizations may approach vulnerability hunting. If broadly deployed, such AI systems could dramatically reduce the window between a bug's introduction and its discovery. However, access remains limited to a handful of users, raising questions about equitable security benefits.
Experts caution that AI-assisted bug hunting is still nascent. Critics argue that reliance on proprietary models creates dependency risks, and that the approach's effectiveness relative to traditional methods remains unverified across diverse codebases.