A critical zero-click vulnerability, named ShadowLeak, has been discovered in OpenAI’s ChatGPT Deep Research agent, allowing attackers to extract sensitive Gmail data without any user interaction. By embedding hidden instructions in a seemingly normal email using techniques like white-on-white text, threat actors can manipulate the AI agent to collect and exfiltrate personal information directly from the victim’s inbox. This attack occurs within OpenAI’s cloud infrastructure, making it invisible to local or enterprise security tools.
The exploit leverages indirect prompt injection, where the AI reads and obeys concealed commands while processing emails for research. Radware researchers demonstrated that data could be encoded in Base64 and transmitted via a browser function to an external server. Although the proof-of-concept focused on Gmail integration, the method could extend to other connected services like Outlook, Dropbox, or SharePoint, broadening the potential impact.
This vulnerability highlights the risks of AI agents processing unstructured or untrusted data without adequate safeguards. Unlike client-side attacks, ShadowLeak operates entirely in the cloud, bypassing conventional defenses. In a related development, researchers also showed that ChatGPT could be tricked into solving CAPTCHAs by reframing them as “fake” tests, further illustrating the need for robust AI security measures, including context integrity and continuous testing.

