Check Point Research discovered a vulnerability in ChatGPT's code execution runtime that allowed data exfiltration via a DNS-based side channel. Because the sandboxed Python environment blocked direct outbound internet access but still permitted DNS resolution, a malicious prompt could encode sensitive conversation data into DNS subdomain queries and transmit it to an attacker-controlled server — silently, without any user-visible warning or confirmation dialog. The same channel could also be used to establish a remote shell inside the execution container. The attack could be delivered via a crafted prompt shared as a 'productivity tip' or embedded directly in a malicious custom GPT. A proof-of-concept demonstrated leaking patient identity and medical assessments from an uploaded lab PDF. OpenAI confirmed the fix was fully deployed on February 20, 2026.

From research.checkpoint.com · 10 min read
Table of contents

- Key Takeaways
- What Happened
- The Intended Safeguards
- From One Message to Silent Exfiltration
- Malicious GPTs
- From Data Exfiltration to Remote Shell
- DNS Tunneling in an AI Runtime
- Conclusion
