OpenAI has patched a vulnerability in ChatGPT that allowed attackers to exfiltrate sensitive user data through malicious prompts, according to findings from Check Point researchers. The flaw enabled the extraction of conversation history, uploaded files, and other sensitive content without the user's knowledge or consent.
A single malicious prompt was enough to turn an ordinary ChatGPT conversation into a covert data-exfiltration channel, Check Point found, compromising the user's messages and any files uploaded during the session.