A conference talk by Jason Haddix on offensive security methodology for AI systems. Drawing on 2+ years of AI pentesting experience, it walks through a custom LLM assessment methodology covering input identification, ecosystem attacks, prompt leakage, RAG data exfiltration, and pivoting to internal systems.

55m watch time
