Datadog's Security team built AI Guard, an inline guardrail application that detects and blocks unsafe LLM behavior—such as prompt injection, data leaks, and unsafe code execution—in real time for Bits AI Agents. The team used Datadog LLM Observability to instrument agent workflows, build statistically valid evaluation datasets, and evaluate detection accuracy.
Table of contents
- Building a real-time system for runtime protection
- Creating a statistically valid dataset
- Instrumenting evaluation and agent workflows with LLM Observability
- Evaluating detection accuracy
- Expanding model coverage
- Monitoring in production
- Accelerating investigations with automatic detections and trace-level visibility