Sentry's LLM monitoring feature has built-in support for Python and JavaScript but not PHP. This guide demonstrates how to manually implement AI tracing for OpenAI calls in Laravel by wrapping API requests with Sentry spans that use the documented `gen_ai.*` semantic conventions. The implementation tracks token usage, model versions, latency, and finish reasons, enabling cost tracking and performance analysis. The pattern degrades gracefully in contexts without an active transaction, manages the span hierarchy correctly, and extends to other LLM providers.
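A minimal sketch of what such a wrapper might look like, assuming the `sentry/sentry-laravel` and `openai-php/client` packages; the helper name `traceOpenAiChat` is illustrative, not from the article:

```php
<?php

use Sentry\SentrySdk;
use Sentry\Tracing\SpanContext;

function traceOpenAiChat(\OpenAI\Client $client, array $params): mixed
{
    $parent = SentrySdk::getCurrentHub()->getSpan();

    // Graceful degradation: with no active transaction (e.g. an
    // uninstrumented queue job), just make the call untraced.
    if ($parent === null) {
        return $client->chat()->create($params);
    }

    $context = new SpanContext();
    $context->setOp('gen_ai.chat');
    $context->setDescription('chat ' . ($params['model'] ?? 'unknown'));

    $span = $parent->startChild($context);
    // Make the child the current span so nested work attaches to it.
    SentrySdk::getCurrentHub()->setSpan($span);

    try {
        $response = $client->chat()->create($params);

        // Attach `gen_ai.*` attributes per Sentry's semantic conventions.
        $span->setData([
            'gen_ai.request.model'           => $params['model'] ?? null,
            'gen_ai.response.model'          => $response->model,
            'gen_ai.usage.input_tokens'      => $response->usage->promptTokens,
            'gen_ai.usage.output_tokens'     => $response->usage->completionTokens,
            'gen_ai.response.finish_reasons' => [$response->choices[0]->finishReason],
        ]);

        return $response;
    } finally {
        $span->finish();
        // Restore the parent so later spans nest correctly.
        SentrySdk::getCurrentHub()->setSpan($parent);
    }
}
```

The `finally` block ensures the span is finished and the parent restored even when the API call throws, so a failed request still shows up with its latency recorded.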

From joshhornby.com
Table of contents

- Implementation
- What shows up in Sentry
- Cost tracking
- Extending it
