Max Payout for Bug Bounty Program Up From $20,000 to $100,000

OpenAI announced a cybersecurity initiative that aims to improve the resilience of its large language models by rewarding the discovery of critical vulnerabilities and improving threat mitigation.
The Sam Altman-led company raised the maximum payout for its bug bounty program from $20,000 to $100,000 to attract researchers. Launched last April, the program will offer the six-figure sum for “exceptional and differentiated critical findings.” It also introduced a limited-time bonus promotion that doubles payouts in specific categories, up to $13,000 for priority access control vulnerabilities. The promotion, covering “priority 1-3 IDOR access control vulnerabilities on any in-scope target,” started on March 26 and runs through April 30. It doubles the original IDOR bounty range: the minimum rises from $200 to $400 and the maximum from $6,500 to $13,000.
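For readers unfamiliar with the vulnerability class the promotion targets, an IDOR (insecure direct object reference) occurs when an application uses a client-supplied identifier to fetch a record without checking that the requester owns it. The sketch below is purely illustrative; the function and record names are hypothetical and do not describe OpenAI's systems.

```python
# Hypothetical illustration of an IDOR access control flaw and its fix.
# All names and data are invented for this example.

INVOICES = {
    101: {"owner": "alice", "total": 42.00},
    102: {"owner": "bob", "total": 99.00},
}

def get_invoice_vulnerable(user: str, invoice_id: int) -> dict:
    """IDOR: trusts the client-supplied ID with no ownership check."""
    return INVOICES[invoice_id]  # any authenticated user can read any record

def get_invoice_fixed(user: str, invoice_id: int) -> dict:
    """Fixed: authorizes the request against the record's owner."""
    invoice = INVOICES[invoice_id]
    if invoice["owner"] != user:
        raise PermissionError("not your invoice")
    return invoice
```

In the vulnerable version, "alice" can retrieve invoice 102 simply by guessing its ID; the fixed version rejects the request because the ownership check fails.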
The company also expanded its Cybersecurity Grant Program, launched in 2023, which has funded 28 research projects on topics including prompt injection attacks, secure code generation and autonomous cybersecurity defenses. The program is now seeking proposals on new topics, including software patching, model privacy, threat detection and response, security integration and the resilience of AI agents against sophisticated attacks. Researchers will also have access to microgrants in the form of API credits to support their security research efforts.
The AI giant has entered a red team partnership with cybersecurity firm SpecterOps to simulate adversarial attacks across OpenAI’s corporate, cloud and production environments to identify weaknesses before malicious actors can exploit them. The continuous testing is expected to provide critical insights into securing AI systems, helping identify vulnerabilities and fortify defenses against threats like prompt injection and other malicious manipulations.
The company said it is working with academic, government and commercial researchers to improve AI's capability to identify and patch vulnerabilities in software. As findings emerge, it said it will share disclosures with relevant open-source communities to enhance broader cybersecurity resilience.