
OpenAI Launches Bug Bounty Program to Ensure System Security



OpenAI, the company behind the massively popular ChatGPT AI chatbot, has launched a bug bounty program in an effort to ensure its systems are "safe and secure." To that end, it has partnered with the crowdsourced security platform Bugcrowd, allowing independent researchers to report vulnerabilities discovered in its products in exchange for rewards ranging from "$200 for low-severity findings to up to $20,000 for exceptional discoveries."

It's worth noting that the program does not cover model safety or hallucination issues, wherein the chatbot is prompted to generate malicious code or other faulty outputs. The company noted that "addressing these issues often involves substantial research and a broader approach." Other prohibited categories include denial-of-service (DoS) attacks, brute-forcing OpenAI APIs, and demonstrations that aim to destroy data or gain unauthorized access to sensitive information.

"Please note that authorized testing does not exempt you from all of OpenAI's terms of service," the company cautioned. "Abusing the service may result in rate limiting, blocking, or banning."

What's in scope, however, are defects in OpenAI APIs, ChatGPT (including plugins), third-party integrations, public exposure of OpenAI API keys, and any of the domains operated by the company.

The development comes in response to OpenAI patching account takeover and data exposure flaws in ChatGPT, which prompted Italian data protection regulators to take a closer look at the platform.
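To illustrate one of the in-scope categories, public exposure of OpenAI API keys, here is a minimal sketch of how a researcher might scan text for candidate leaked keys. The regex below assumes the legacy "sk-" key prefix followed by a long alphanumeric run; actual key formats vary and change over time, so the pattern is an assumption for illustration, not an official specification.

```python
import re

# Assumed (illustrative) pattern for OpenAI-style API keys: the "sk-"
# prefix followed by 20+ alphanumeric characters. Real key formats may
# differ; treat any match as a candidate to verify, not a confirmed leak.
OPENAI_KEY_PATTERN = re.compile(r"\bsk-[A-Za-z0-9]{20,}\b")


def find_exposed_keys(text: str) -> list[str]:
    """Return candidate OpenAI API keys found in the given text."""
    return OPENAI_KEY_PATTERN.findall(text)


# Example: a config snippet containing a fake key alongside clean text.
sample = 'config = {"api_key": "sk-' + "A" * 48 + '"}\nnote = "no key here"'
print(find_exposed_keys(sample))  # one candidate key reported
```

In practice, tools like this are run over public code repositories or pastebin-style sites; any hit would then be reported through the bounty program rather than used to access the API.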
