Agentic AI
,
Artificial Intelligence & Machine Learning
,
Data Security
Attackers Could Siphon Gmail Data Unnoticed From Users Who Let AI Tool Access Email
OpenAI patched a vulnerability in ChatGPT’s Deep Research agent that could have enabled hackers to extract Gmail data without the user’s knowledge. The flaw affected subscribers who authorized the artificial intelligence tool to access their Gmail accounts, Radware security researchers said.
See Also: OnDemand | Transform API Security with Unmatched Discovery and Defense
Deep Research is designed to conduct complex online research and answer detailed questions on behalf of users and integrate itself with multiple accounts and services, if permitted.
Radware researchers found the vulnerability by sending an email containing hidden instructions to themselves. When Deep Research accessed the email, it followed the embedded directives to search for personal information such as full names and addresses, and then transmitted that data to a web address controlled by the researchers. The process required no clicks from the user, meaning sensitive data could have left accounts unnoticed.
Radware’s director of threat research Pascal Geenens reportedly said that “if a corporate account was compromised, the company wouldn’t even know information was leaving.”
The vulnerability dubbed ShadowLeak is the first service-side leaking, zero-click indirect prompt injection, which means data is exfiltrated directly from OpenAI’s infrastructure rather than through the client device. Previously disclosed vulnerabilities used image rendering in the client user interface of the chat agent.
“There is no trace of a web call or data leaking through the affected organization’s boundary. There is no visibility or traceability. Organizations are blind and cannot detect the leak,” he told Information Security Media Group. After data might have been exposed, organizations have no logs that companies could use to estimate the size and the scope of the compromised data, Geenens said.
The exploit relied on the technique of prompt injection in which attackers embed instructions that manipulate an AI agent into performing unauthorized actions. ShadowLeak is unusual because it targeted the AI agent itself rather than the user’s device. This enabled data to be exfiltrated directly from OpenAI’s servers without leaving obvious traces, making detection difficult.
The researchers said that prompt injections exploit the very autonomy AI agents are designed to provide. Deep Research and similar tools can access emails, calendars and cloud services to perform tasks with minimal human oversight. These capabilities are marketed as time-saving, but can be misused if malicious instructions are concealed in otherwise innocuous content, such as emails formatted with hidden or invisible text.
Radware described the development of ShadowLeak as a methodical process. “This process was a rollercoaster of failed attempts, frustrating roadblocks and, finally, a breakthrough,” the researchers said. Unlike many prompt injections, ShadowLeak executed entirely within OpenAI’s infrastructure, avoiding endpoints where conventional security measures operate.
Deep Research is available to ChatGPT subscribers for an additional fee and represents a broader trend of deploying AI agents to act autonomously. Beyond Gmail, Radware said other services integrated with Deep Research include Microsoft Outlook, GitHub, Google Drive and Dropbox and could be susceptible to similar exploits. “The same technique can be applied to these additional connectors to exfiltrate highly sensitive business data such as contracts, meeting notes or customer records,” researchers said.
“It is easy to get a ChatGPT subscription and use it from a web browser running from a managed device that has access to a lot of sensitive data,” Geenens said. Organizations that did not provide corporate file sharing options see their employees using different free tiers of cloud file sharing providers and were unable to control or see which data was shared outside of the organization, he said.
OpenAI addressed the vulnerability earlier this month. A spokesperson reportedly told Bloomberg that “researchers often test these systems in adversarial ways, and we welcome their research as it helps us improve.”
Radware did not find evidence that the flaw was exploited outside its controlled testing.
Geenens outlined enterprise considerations in governance and visibility when evaluating AI assistants. Companies must be aware of what data the AI is allowed to access and what information could potentially be exposed. They must also know what external sources the AI connected to and which of those could be abused forprompt injection attacks. Every interaction – both prompts and responses – should also be logged and subject to inspection. “Only through these records can potential leaks be identified and the scope of compromised data assessed,” he said.