OpenTelemetry LLM Tracing with Vercel AI SDK and Pydantic Logfire

pydantic.dev · 10 days ago

Vercel has done some genuinely nice work with OpenTelemetry. Next.js ships with built-in OTel instrumentation for route handlers, server components, and fetch calls. The `@vercel/otel` package makes the setup a one-liner, and it handles both the Node.js and Edge runtimes. You get request-level visibility into your Next.js app without writing any instrumentation code.
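The one-liner in question lives in an `instrumentation.ts` file at the project root, following Next.js's instrumentation convention. A minimal sketch (the service name is a placeholder):

```typescript
// instrumentation.ts — Next.js calls register() once at server startup.
import { registerOTel } from '@vercel/otel';

export function register() {
  // Sets up an OTel tracer provider and exporter for both
  // the Node.js and Edge runtimes.
  registerOTel({ serviceName: 'my-next-app' });
}
```

With that in place, route handlers, server components, and outgoing `fetch` calls produce spans automatically.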

But the fascinating part is the AI SDK. Enable `experimental_telemetry` on a `generateText` or `streamText` call, and the SDK emits rich OTel spans with the full prompt, the model's response, token counts, streaming latency, and tool-call details. It follows the OpenTelemetry Semantic Conventions for GenAI (`gen_ai.*` attributes) alongside a richer set of AI SDK-specific ones (`ai.*`). That's a lot of useful data, just sitting there waiting for a backend to pick it up.
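Concretely, enabling telemetry is a per-call option. A sketch (the model, `functionId`, and metadata values are illustrative):

```typescript
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const { text, usage } = await generateText({
  model: openai('gpt-4o-mini'), // placeholder model
  prompt: 'Why is the sky blue?',
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'sky-question',     // names the span for this call site
    metadata: { userId: 'demo-1' }, // lands as custom span attributes
  },
});
```

The resulting span carries the prompt, the response text, and token usage, so the backend can reconstruct the full exchange without any extra logging code.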

On our end, Pydantic Logfire is built around these same conventions. When GenAI spans come in from the Vercel AI SDK, Pydantic AI, OpenAI instrumentation, LangChain, or anything else that follows the standard, the LLM Panel picks them up and renders them as readable conversations with token usage, cost, and latency metrics. No integration to configure. Point your OTel exporter at Logfire, and the data lands in the right views.
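"Point your OTel exporter at Logfire" typically means setting the standard OTLP environment variables. A sketch — the endpoint URL and header format here are assumptions; confirm the exact values and a write token in your Logfire project settings:

```shell
# Illustrative only: route OTLP traces to Logfire via the
# standard OpenTelemetry exporter environment variables.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://logfire-api.pydantic.dev"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=<your-logfire-write-token>"
```

Because the AI SDK's spans follow the GenAI semantic conventions, no Logfire-specific SDK is required on the sending side.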

Originally published by pydantic.dev
