OpenAI, creator of artificial intelligence (AI)-powered chatbot ChatGPT, has confirmed that a bug in an open-source library used by the chatbot may have caused a data leak.
According to OpenAI, a vulnerability in the Redis open-source library used by ChatGPT meant that “some users” were able to see “titles from another active user’s chat history”, and may also have been able to view the first message of a newly created conversation if both users were active at the same time.
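The article does not spell out the underlying mechanism, but leaks of this class can occur when a shared, pooled cache connection matches replies to requests purely by order: if one request is abandoned before its reply is read, the orphaned reply is handed to the next user of the connection. The sketch below is a minimal, hypothetical illustration of that failure mode; the class, key and variable names are invented and it does not represent OpenAI’s or Redis’s actual code.

```python
from collections import deque


class SharedConnection:
    """Toy stand-in for a pooled cache connection that matches replies to requests by order."""

    def __init__(self):
        self._pending_replies = deque()

    def send_request(self, key, backing_store):
        # The "server" immediately queues the value for the requested key.
        self._pending_replies.append(backing_store[key])

    def read_reply(self):
        # Replies are handed out strictly in FIFO order, so an orphaned reply
        # from an abandoned request goes to whoever reads the connection next.
        return self._pending_replies.popleft()


# Hypothetical cache contents for two users (invented keys and values).
store = {
    "user_a:chat_titles": ["User A's private chat title"],
    "user_b:chat_titles": ["User B's private chat title"],
}
conn = SharedConnection()

# User A's request is sent but then abandoned (e.g. cancelled) before its
# reply is read, leaving A's data at the front of the reply queue.
conn.send_request("user_a:chat_titles", store)

# User B reuses the same connection and reads the first queued reply,
# which is actually User A's data.
conn.send_request("user_b:chat_titles", store)
print(conn.read_reply())  # -> ["User A's private chat title"]
```

Running the snippet prints User A’s chat title in response to User B’s request, mirroring the symptom OpenAI described.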
The company also admitted that the bug may have caused the “unintentional visibility of payment-related information” for premium ChatGPT users who were active between 1am and 10am PST on March 20. This meant that the names, email addresses, payment addresses, credit card types and last four digits of the payment card numbers of active users may have been visible to other users during this period. No full payment card numbers were visible at any time. OpenAI said it believes the number of users affected by this bug to be “extremely low”.
On March 24, OpenAI confirmed that it had taken ChatGPT offline once the bug was discovered and had patched the flaw the same day. The company said it would contact all those affected by the data leak.
On April 11, OpenAI announced that it would be partnering with bug bounty platform Bugcrowd to launch a bug bounty program. The program is, according to OpenAI, part of its “commitment to secure AI” and to “recognize and reward the valuable insights of security researchers who contribute to keeping our technology and company secure”.
Through the bug bounty program, individuals will be able to report any security flaws, vulnerabilities or bugs found within its systems in exchange for a monetary reward. Rewards range from US$200 for “low-severity” findings to US$20,000 for “exceptional discoveries”.