Standard OpenTelemetry GenAI instrumentation leaves three gaps for LLM applications: no conversation threading across multi-turn sessions, no cost tracking, and prompts/completions routed through the log pipeline rather than span attributes. last9-genai is an OTel extension SDK that fills these gaps via two custom processors.

Last9LogToSpanProcessor bridges log records from opentelemetry-instrumentation-openai-v2 back onto active spans so dashboards can read prompt content. Last9SpanProcessor uses Python contextvars to propagate conversation_id, workflow_id, and agent metadata across all spans in a session, and computes per-call USD cost from token counts and custom pricing.

The SDK also provides context managers (conversation_context, workflow_context, agent_context) and an @observe decorator for non-auto-instrumented clients like Anthropic. A single install() call handles the six-object wiring that is otherwise a common source of silent failures.
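The core mechanism is worth seeing in miniature. The sketch below is not last9-genai's actual code; it uses only the standard library and a mock Span to illustrate the technique the SDK is described as using: a contextvar carries the conversation ID, a processor stamps it onto every span in on_start, and cost is derived from token counts and a pricing table. The PRICING numbers and attribute names are illustrative assumptions, and the mock glosses over the fact that real OTel spans are read-only by the time on_end runs (a constraint the article discusses separately).

```python
from contextvars import ContextVar
from contextlib import contextmanager
from dataclasses import dataclass, field
from typing import Optional

# Contextvar holding the active conversation ID; survives async/thread
# context switches within a session, unlike a plain global.
_conversation_id: ContextVar[Optional[str]] = ContextVar("conversation_id", default=None)

@contextmanager
def conversation_context(conversation_id: str):
    """Set the conversation ID for all spans started inside this block."""
    token = _conversation_id.set(conversation_id)
    try:
        yield
    finally:
        _conversation_id.reset(token)

@dataclass
class Span:
    """Stand-in for an OTel span; real spans are immutable after ending."""
    name: str
    attributes: dict = field(default_factory=dict)

# Hypothetical pricing table, USD per 1K tokens (illustrative numbers only).
PRICING = {"gpt-4o-mini": {"input": 0.00015, "output": 0.0006}}

class SessionSpanProcessor:
    def on_start(self, span: Span) -> None:
        # Propagation: copy session metadata from the contextvar onto the span.
        cid = _conversation_id.get()
        if cid is not None:
            span.attributes["gen_ai.conversation.id"] = cid

    def on_end(self, span: Span) -> None:
        # Cost: tokens / 1000 * per-1K rate, summed over input and output.
        rates = PRICING.get(span.attributes.get("gen_ai.request.model"))
        if rates:
            cost = (
                span.attributes.get("gen_ai.usage.input_tokens", 0) / 1000 * rates["input"]
                + span.attributes.get("gen_ai.usage.output_tokens", 0) / 1000 * rates["output"]
            )
            span.attributes["gen_ai.usage.cost_usd"] = round(cost, 6)
```

A 1,000-input / 500-output-token call to the hypothetical model above would be priced at 0.00015 + 0.0003 = 0.00045 USD, attached to the span alongside the conversation ID that every other span in the session also carries.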
Table of contents
- The mismatch nobody warns you about
- Three gaps in standard OTel GenAI instrumentation
- Architecture: how last9-genai extends OTel
- The log-to-span bridge
- contextvars-based propagation
- The on_start / on_end immutability constraint
- The install() API
- Use cases
- Span attributes reference
- What it does not do (yet)
- Getting started
- Summary