Security researchers discovered ShadowLeak, a zero-click vulnerability in ChatGPT's Deep Research agent that allowed attackers to exfiltrate personal data via malicious HTML emails. The attack embeds hidden prompt-injection instructions in the email body, causing the agent to extract PII from a victim's Gmail and send it to attacker-controlled servers. OpenAI has since patched the vulnerability, but similar attacks could target any AI agent with access to external tools.
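Prompt injections like this typically hide instructions using CSS tricks such as `display:none`, zero font sizes, or white-on-white text, so the victim never sees them but the agent's text extractor does. As an illustration of one possible mitigation layer (this is a hypothetical sketch, not OpenAI's actual fix, and the marker list is a heuristic assumption), a pre-processor could strip style-hidden text before an email body ever reaches the agent:

```python
from html.parser import HTMLParser

# Heuristic list of inline-style fragments that commonly hide text from
# a human reader while leaving it visible to a text extractor.
HIDDEN_MARKERS = ("display:none", "font-size:0", "color:#fff",
                  "color:#ffffff", "color:white")

class VisibleTextExtractor(HTMLParser):
    """Collects only text that is not inside a style-hidden element."""
    # Void elements never get a closing tag, so they must not touch the stack.
    VOID = {"br", "img", "hr", "meta", "link", "input", "area", "base",
            "col", "embed", "source", "track", "wbr"}

    def __init__(self):
        super().__init__()
        self.stack = []   # one bool per open element: hidden (or ancestor hidden)?
        self.chunks = []  # visible text fragments

    def handle_starttag(self, tag, attrs):
        if tag in self.VOID:
            return
        style = (dict(attrs).get("style") or "").replace(" ", "").lower()
        hidden = any(marker in style for marker in HIDDEN_MARKERS)
        parent_hidden = self.stack[-1] if self.stack else False
        self.stack.append(hidden or parent_hidden)

    def handle_endtag(self, tag):
        if tag in self.VOID:
            return
        if self.stack:
            self.stack.pop()

    def handle_data(self, data):
        currently_hidden = self.stack and self.stack[-1]
        if not currently_hidden and data.strip():
            self.chunks.append(data.strip())

def visible_text(html: str) -> str:
    """Return only the text a human reader of the email would actually see."""
    parser = VisibleTextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)
```

A real defense would need a full CSS engine (hidden text can also come from positioning, opacity, or external stylesheets), but even a filter like this narrows the gap between what the user sees and what the agent reads.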

From aicyberinsights.com