Security researchers demonstrate a novel attack technique in which malicious webpages use LLM APIs to generate phishing JavaScript dynamically at runtime. Attackers craft prompts that bypass AI safety guardrails, causing LLMs to return malicious code snippets that are assembled and executed in the victim's browser. Because the payload is materialized only at runtime and never appears in the page's static source, this undermines signature-based and static-analysis detection.

12-minute read, from unit42.paloaltonetworks.com
Table of contents

- Executive Summary
- LLM-Augmented Runtime Assembly Attack Model
- Generalizing the Threat and Expanding the Attack Surface
- Recommendations for Defenders
- Conclusion
- Additional Resources
