Security researchers have demonstrated a novel attack technique in which malicious webpages use LLM APIs to generate phishing JavaScript dynamically at runtime. Attackers craft prompts that bypass AI safety guardrails, causing LLMs to return malicious code snippets that are assembled and executed in the victim's browser. This creates a payload that never exists in complete, static form on the page, frustrating signature-based detection.
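To make the pattern concrete, here is a minimal, deliberately benign sketch of the runtime-assembly loop from a defender's perspective. The `fetchSnippet` callback, the prompts, and the fragment split are illustrative assumptions, not code from any observed sample:

```javascript
// Illustrative sketch of LLM-augmented runtime assembly (benign placeholder).
// fetchSnippet is a hypothetical callback that sends one prompt to an LLM API
// and resolves with the text it returns.
async function assembleAndRun(fetchSnippet) {
  // 1. Request individually innocuous-looking fragments; each prompt alone
  //    may be too benign to trip the model's safety guardrails.
  const fragments = await Promise.all([
    fetchSnippet("write a JS helper that reads form field values"),
    fetchSnippet("write a JS helper that POSTs JSON to a given URL"),
  ]);

  // 2. Assemble the fragments into a single payload string only at runtime,
  //    so no complete script ever appears in the page's static source.
  const payload = fragments.join("\n");

  // 3. Execute via the Function constructor (an eval-equivalent sink).
  return new Function(payload);
}
```

The key observation for defenders is that static scanners see only the loader and the prompts, never the assembled payload, which is why dynamic analysis or egress monitoring of LLM API calls is needed to catch this class of attack.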
Table of contents

Executive Summary
LLM-Augmented Runtime Assembly Attack Model
Generalizing the Threat and Expanding the Attack Surface
Recommendations for Defenders
Conclusion
Additional Resources