A recap of an OpenSSF Tech Talk on securing agentic AI systems, featuring experts from Microsoft, Thread AI, and Canonical. Key topics include why AI agents require a new threat model due to their non-deterministic nature, the introduction of SAFE MCP (a MITRE ATT&CK-inspired catalog of 80+ attack techniques targeting tool-based LLMs), and a seven-layer security stack for AI infrastructure. The talk emphasizes least-privilege as an architectural requirement, the risks of prompt injection and confused deputy problems, and the need for SBOM visibility across the 3,000+ open source dependencies in a typical AI stack.

From openssf.org
Table of contents

- The New Threat Model: Why Agents Differ
- Introducing SAFE MCP: A Threat Catalog for the AI Era
- The "Seven-Layer Cake" of AI Infrastructure
- How to Get Involved
