A conference talk demonstrating real security vulnerabilities in LLM-powered applications. Live demos show how path traversal and SQL injection can be used to poison RAG context and chat memory, enabling unauthorized actions like canceling bookings or dropping database tables. The talk covers prompt injection techniques including multi-stage attacks that bypass system message restrictions to extract PII. Mitigations discussed include input/output guardrails using LLM-as-a-judge, limiting LLM permissions and tool scope, enforcing structured output, human-in-the-loop confirmation for high-risk actions, and using local models for privacy-sensitive data. MCP server risks and denial-of-pocket attacks are also highlighted.
•46m watch time
Sort: