OpenRouter's Broadcast feature automatically sends OpenTelemetry traces from every LLM API request to Grafana Cloud, requiring no code changes or SDK installation. Traces include model info, token usage, cost data, latency breakdowns, and error details following OTel semantic conventions for generative AI. Teams can use TraceQL to query traces, build dashboards for cost attribution across models, monitor p50/p95/p99 latency, debug failed requests, and plan capacity. A Privacy Mode excludes prompt/completion content while retaining operational metrics. Setup takes a few minutes via the OpenRouter dashboard.
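As a minimal sketch of the querying workflow described above: the snippet below builds a TraceQL query that filters LLM spans by an OTel GenAI semantic-convention attribute and composes a Grafana Tempo search URL. The attribute name, host, endpoint path, and parameters are assumptions for illustration, not confirmed by this article; check your Tempo/Grafana Cloud documentation for the exact API.

```python
from urllib.parse import urlencode

# Hypothetical TraceQL query: find slow spans for one model, using an
# OTel GenAI semantic-convention attribute name (assumed here).
query = '{ span.gen_ai.request.model = "openai/gpt-4o" && duration > 2s }'

# Tempo exposes a search endpoint that accepts TraceQL via the `q`
# parameter (host, path, and params below are placeholder assumptions).
params = urlencode({"q": query, "limit": 20})
url = f"https://tempo.example.com/api/search?{params}"
print(url)
```

In practice you would run the same TraceQL in Grafana's Explore view; the point is that each filterable field (model, token usage, cost, latency) arrives as a span attribute, so cost-attribution dashboards reduce to queries like this one.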

8 min read · From grafana.com
Table of contents

- Why LLM observability is different
- How OpenRouter Broadcast works with Grafana Cloud
