OpenAI has launched a new bug bounty program specifically targeting abuse and safety risks in its artificial intelligence systems. The program focuses on identifying design or implementation vulnerabilities that could lead to material harm, expanding beyond traditional cybersecurity flaws to address AI-specific threats.

The program's scope appears to prioritize safety-related vulnerabilities over conventional technical security issues. Rather than limiting rewards to classic exploits, OpenAI is soliciting reports on AI misuse, harmful outputs, and safety failures that could affect users or society more broadly.