Google's NotebookML is vulnerable to prompt injection and data exfiltration: instructions embedded in an analyzed document can manipulate the chat conversation and cause the model to render attacker-controlled images or hyperlinks that leak chat data to an external server. Mitigations include not rendering images or clickable hyperlinks that point to arbitrary domains.
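As a sketch of the mitigation described above, a renderer could refuse to display images or links whose host is not on a trusted allowlist, so an injected payload cannot smuggle data out through a URL on an attacker's domain. The allowlist contents and helper name below are hypothetical, not NotebookML's actual implementation:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hosts the chat UI is willing to render
# images/links from; everything else is treated as untrusted.
ALLOWED_HOSTS = {"lh3.googleusercontent.com"}

def is_safe_render_url(url: str) -> bool:
    """Return True only for http(s) URLs on an allowed host.

    Image-based exfiltration works by encoding chat data into the
    query string of a rendered image URL, so blocking arbitrary
    hosts closes that channel.
    """
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and parsed.hostname in ALLOWED_HOSTS

# An injected payload typically looks like an image pointing at the
# attacker's server with stolen data in the query string:
print(is_safe_render_url("https://attacker.example/log?data=secret"))   # blocked
print(is_safe_render_url("https://lh3.googleusercontent.com/img.png"))  # allowed
```

A stricter variant would also block `http:` entirely and normalize the URL before checking, but even this coarse host check removes the arbitrary-domain exfiltration channel the article warns about.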
Table of contents
- Bobby Tables and Prompt Injection
- Demo Setup
- Demo Video
- Severity - Reading Data From Other Documents
- Responsible Disclosure
- Mitigations
- Recommendations for users of NotebookML
- Conclusions