Telemetry

AI SDK Telemetry is experimental and may change in the future.

The AI SDK uses OpenTelemetry to collect telemetry data. OpenTelemetry is an open-source observability framework designed to provide standardized instrumentation for collecting telemetry data.

Check out the AI SDK Observability Integrations to see providers that offer monitoring and tracing for AI SDK applications.

Enabling telemetry

For Next.js applications, please follow the Next.js OpenTelemetry guide to enable telemetry first.
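
As a minimal sketch of what that guide sets up, an instrumentation.ts file at the project root registers OpenTelemetry through the @vercel/otel package (the service name below is a placeholder):

import { registerOTel } from '@vercel/otel';

export function register() {
  // Registers a TracerProvider for the Next.js runtime.
  // 'my-next-app' is an example service name; use your own.
  registerOTel({ serviceName: 'my-next-app' });
}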

You can then use the experimental_telemetry option to enable telemetry on specific function calls while the feature is experimental:

import { generateText } from 'ai';

const result = await generateText({
  model: "anthropic/claude-sonnet-4.5",
  prompt: 'Write a short story about a cat.',
  experimental_telemetry: { isEnabled: true },
});

When telemetry is enabled, you can also control whether the input and output values of the function are recorded. Both are recorded by default. You can disable them by setting the recordInputs and recordOutputs options to false.

Disabling the recording of inputs and outputs can be useful for privacy, data transfer, and performance reasons. You might, for example, want to disable recording inputs if they contain sensitive information.
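
For example, to keep tracing enabled while redacting both prompts and generated output from the recorded spans:

const result = await generateText({
  model: "anthropic/claude-sonnet-4.5",
  prompt: 'Write a short story about a cat.',
  experimental_telemetry: {
    isEnabled: true,
    recordInputs: false, // omit prompt values from spans
    recordOutputs: false, // omit generated text from spans
  },
});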

Telemetry Metadata

You can provide a functionId to identify the function that the telemetry data is for, and metadata to include additional information in the telemetry data.

import { generateText } from 'ai';

const result = await generateText({
  model: "anthropic/claude-sonnet-4.5",
  prompt: 'Write a short story about a cat.',
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'my-awesome-function',
    metadata: {
      something: 'custom',
      someOtherThing: 'other-value',
    },
  },
});

Custom Tracer

You may provide a custom tracer, which must be an OpenTelemetry Tracer. This is useful when you want your traces to use a TracerProvider other than the one provided by the @opentelemetry/api singleton.

import { generateText } from 'ai';
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';

const tracerProvider = new NodeTracerProvider();

const result = await generateText({
  model: "anthropic/claude-sonnet-4.5",
  prompt: 'Write a short story about a cat.',
  experimental_telemetry: {
    isEnabled: true,
    tracer: tracerProvider.getTracer('ai'),
  },
});

Telemetry Integrations

Telemetry integrations let you hook into the generation lifecycle to build custom observability — logging, analytics, DevTools, or any other monitoring system. Instead of wiring up individual callbacks on every call, you implement a TelemetryIntegration once and pass it via experimental_telemetry.integrations.

Using an integration

Pass one or more integrations to any generateText or streamText call:

import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { devToolsIntegration } from '@ai-sdk/devtools';

const result = streamText({
  model: openai('gpt-4o'),
  prompt: 'Hello!',
  experimental_telemetry: {
    isEnabled: true,
    integrations: [devToolsIntegration()],
  },
});

You can combine multiple integrations — they all receive the same lifecycle events:

experimental_telemetry: {
  isEnabled: true,
  integrations: [devToolsIntegration(), otelIntegration(), customLogger()],
},

Errors inside integrations are caught and do not break the generation flow.

Building a custom integration

Implement the TelemetryIntegration interface from the ai package. All methods are optional — implement only the lifecycle events you care about:

import type {
  TelemetryIntegration,
  OnStartEvent,
  OnStepFinishEvent,
  OnToolCallFinishEvent,
  OnFinishEvent,
} from 'ai';
import { bindTelemetryIntegration } from 'ai';

class MyIntegration implements TelemetryIntegration {
  async onStart(event: OnStartEvent) {
    console.log('Generation started:', event.model.modelId);
  }

  async onStepFinish(event: OnStepFinishEvent) {
    console.log(
      `Step ${event.stepNumber} done:`,
      event.usage.totalTokens,
      'tokens',
    );
  }

  async onToolCallFinish(event: OnToolCallFinishEvent) {
    if (event.success) {
      console.log(
        `Tool "${event.toolCall.toolName}" took ${event.durationMs}ms`,
      );
    } else {
      console.error(`Tool "${event.toolCall.toolName}" failed:`, event.error);
    }
  }

  async onFinish(event: OnFinishEvent) {
    console.log('Done. Total tokens:', event.totalUsage.totalTokens);
  }
}

export function myIntegration(): TelemetryIntegration {
  return bindTelemetryIntegration(new MyIntegration());
}

Use bindTelemetryIntegration for class-based integrations to ensure this is correctly bound when methods are extracted and called as callbacks.
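
Integrations defined as plain objects have no this to lose, so they can be returned directly. A minimal sketch:

import type { TelemetryIntegration } from 'ai';

export function consoleLogger(): TelemetryIntegration {
  // An object-literal integration needs no bindTelemetryIntegration.
  return {
    async onFinish(event) {
      console.log('Done. Total tokens:', event.totalUsage.totalTokens);
    },
  };
}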

Available lifecycle methods

  • onStart: (event: OnStartEvent) => void | PromiseLike<void>
    Called when the generation operation begins, before any LLM calls.

  • onStepStart: (event: OnStepStartEvent) => void | PromiseLike<void>
    Called when a step (LLM call) begins, before the provider is called.

  • onToolCallStart: (event: OnToolCallStartEvent) => void | PromiseLike<void>
    Called when a tool's execute function is about to run.

  • onToolCallFinish: (event: OnToolCallFinishEvent) => void | PromiseLike<void>
    Called when a tool's execute function completes or errors.

  • onStepFinish: (event: OnStepFinishEvent) => void | PromiseLike<void>
    Called when a step (LLM call) completes.

  • onFinish: (event: OnFinishEvent) => void | PromiseLike<void>
    Called when the entire generation completes (all steps finished).

The event types for each method are the same as the corresponding event callbacks. See the event callbacks documentation for the full property reference of each event.

Collected Data

generateText function

generateText records 3 types of spans:

  • ai.generateText (span): the full length of the generateText call. It contains 1 or more ai.generateText.doGenerate spans. It contains the basic LLM span information and the following attributes:

    • operation.name: ai.generateText and the functionId that was set through telemetry.functionId
    • ai.operationId: "ai.generateText"
    • ai.prompt: the prompt that was used when calling generateText
    • ai.response.text: the text that was generated
    • ai.response.toolCalls: the tool calls that were made as part of the generation (stringified JSON)
    • ai.response.finishReason: the reason why the generation finished
    • ai.settings.maxOutputTokens: the maximum number of output tokens that were set
  • ai.generateText.doGenerate (span): a provider doGenerate call. It can contain ai.toolCall spans. It contains the call LLM span information and the following attributes:

    • operation.name: ai.generateText.doGenerate and the functionId that was set through telemetry.functionId
    • ai.operationId: "ai.generateText.doGenerate"
    • ai.prompt.messages: the messages that were passed into the provider
    • ai.prompt.tools: array of stringified tool definitions. The tools can be of type function or provider-defined-client. Function tools have a name, description (optional), and inputSchema (JSON schema). Provider-defined-client tools have a name, id, and input (Record).
    • ai.prompt.toolChoice: the stringified tool choice setting (JSON). It has a type property (auto, none, required, tool), and if the type is tool, a toolName property with the specific tool.
    • ai.response.text: the text that was generated
    • ai.response.toolCalls: the tool calls that were made as part of the generation (stringified JSON)
    • ai.response.finishReason: the reason why the generation finished
  • ai.toolCall (span): a tool call that is made as part of the generateText call. See Tool call spans for more details.
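
To inspect these spans locally without an observability backend, you can register an OpenTelemetry console exporter before calling generateText. A minimal sketch (with @opentelemetry/sdk-trace-node 2.x; on older 1.x versions, use provider.addSpanProcessor instead of the constructor option):

import {
  NodeTracerProvider,
  SimpleSpanProcessor,
  ConsoleSpanExporter,
} from '@opentelemetry/sdk-trace-node';

// Prints every finished span (ai.generateText, ai.generateText.doGenerate,
// ai.toolCall, ...) to stdout as it completes.
const provider = new NodeTracerProvider({
  spanProcessors: [new SimpleSpanProcessor(new ConsoleSpanExporter())],
});
provider.register();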

streamText function

streamText records 3 types of spans and 2 types of events:

  • ai.streamText (span): the full length of the streamText call. It contains an ai.streamText.doStream span. It contains the basic LLM span information and the following attributes:

    • operation.name: ai.streamText and the functionId that was set through telemetry.functionId
    • ai.operationId: "ai.streamText"
    • ai.prompt: the prompt that was used when calling streamText
    • ai.response.text: the text that was generated
    • ai.response.toolCalls: the tool calls that were made as part of the generation (stringified JSON)
    • ai.response.finishReason: the reason why the generation finished
    • ai.settings.maxOutputTokens: the maximum number of output tokens that were set
  • ai.streamText.doStream (span): a provider doStream call. This span contains an ai.stream.firstChunk event and ai.toolCall spans. It contains the call LLM span information and the following attributes:

    • operation.name: ai.streamText.doStream and the functionId that was set through telemetry.functionId
    • ai.operationId: "ai.streamText.doStream"
    • ai.prompt.messages: the messages that were passed into the provider
    • ai.prompt.tools: array of stringified tool definitions. The tools can be of type function or provider-defined-client. Function tools have a name, description (optional), and inputSchema (JSON schema). Provider-defined-client tools have a name, id, and input (Record).
    • ai.prompt.toolChoice: the stringified tool choice setting (JSON). It has a type property (auto, none, required, tool), and if the type is tool, a toolName property with the specific tool.
    • ai.response.text: the text that was generated
    • ai.response.toolCalls: the tool calls that were made as part of the generation (stringified JSON)
    • ai.response.msToFirstChunk: the time it took to receive the first chunk in milliseconds
    • ai.response.msToFinish: the time it took to receive the finish part of the LLM stream in milliseconds
    • ai.response.avgCompletionTokensPerSecond: the average number of completion tokens per second
    • ai.response.finishReason: the reason why the generation finished
  • ai.toolCall (span): a tool call that is made as part of the streamText call. See Tool call spans for more details.

  • ai.stream.firstChunk (event): an event that is emitted when the first chunk of the stream is received.

    • ai.response.msToFirstChunk: the time it took to receive the first chunk
  • ai.stream.finish (event): an event that is emitted when the finish part of the LLM stream is received.

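The timing attributes make stream latency easy to monitor. As a sketch, enabling telemetry on a streamed call records ai.response.msToFirstChunk once the first chunk arrives:

import { streamText } from 'ai';

const result = streamText({
  model: "anthropic/claude-sonnet-4.5",
  prompt: 'Write a short story about a cat.',
  experimental_telemetry: { isEnabled: true },
});

// Consuming the stream drives the provider call; the firstChunk event and
// the timing attributes are recorded on the doStream span as data flows.
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}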

Deprecated object APIs

generateObject and streamObject are deprecated. Use generateText and streamText with the output property instead.

If you still run deprecated object APIs, you will see legacy span names:

  • generateObject: ai.generateObject, ai.generateObject.doGenerate
  • streamObject: ai.streamObject, ai.streamObject.doStream, ai.stream.firstChunk

Legacy object spans include the same core metadata as other LLM spans, plus object-specific attributes such as ai.schema.*, ai.response.object, and ai.settings.output.
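
As a hedged sketch of the migration (assuming the Output helper exported by the ai package; depending on your version the property may still be named experimental_output):

import { generateText, Output } from 'ai';
import { z } from 'zod';

const result = await generateText({
  model: "anthropic/claude-sonnet-4.5",
  prompt: 'Describe a cat.',
  // Structured output replaces generateObject; the legacy ai.generateObject
  // spans are then replaced by regular ai.generateText spans.
  output: Output.object({
    schema: z.object({ name: z.string(), mood: z.string() }),
  }),
  experimental_telemetry: { isEnabled: true },
});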

embed function

embed records 2 types of spans:

  • ai.embed (span): the full length of the embed call. It contains exactly one ai.embed.doEmbed span. It contains the basic embedding span information and the following attributes:

    • operation.name: ai.embed and the functionId that was set through telemetry.functionId
    • ai.operationId: "ai.embed"
    • ai.value: the value that was passed into the embed function
    • ai.embedding: a JSON-stringified embedding
  • ai.embed.doEmbed (span): a provider doEmbed call. It contains the basic embedding span information and the following attributes:

    • operation.name: ai.embed.doEmbed and the functionId that was set through telemetry.functionId
    • ai.operationId: "ai.embed.doEmbed"
    • ai.values: the values that were passed into the provider (array)
    • ai.embeddings: an array of JSON-stringified embeddings
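
For example (the embedding model choice is illustrative):

import { embed } from 'ai';
import { openai } from '@ai-sdk/openai';

const { embedding } = await embed({
  model: openai.embedding('text-embedding-3-small'),
  value: 'sunny day at the beach', // recorded as ai.value
  experimental_telemetry: { isEnabled: true },
});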

embedMany function

embedMany records 2 types of spans:

  • ai.embedMany (span): the full length of the embedMany call. It contains 1 or more ai.embedMany.doEmbed spans. It contains the basic embedding span information and the following attributes:

    • operation.name: ai.embedMany and the functionId that was set through telemetry.functionId
    • ai.operationId: "ai.embedMany"
    • ai.values: the values that were passed into the embedMany function
    • ai.embeddings: an array of JSON-stringified embeddings
  • ai.embedMany.doEmbed (span): a provider doEmbed call. It contains the basic embedding span information and the following attributes:

    • operation.name: ai.embedMany.doEmbed and the functionId that was set through telemetry.functionId
    • ai.operationId: "ai.embedMany.doEmbed"
    • ai.values: the values that were sent to the provider
    • ai.embeddings: an array of JSON-stringified embeddings for each value
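
The batch variant looks the same, with ai.values holding the full input array. A functionId makes the spans easy to find (the id below is an example):

import { embedMany } from 'ai';
import { openai } from '@ai-sdk/openai';

const { embeddings } = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: ['sunny day at the beach', 'rainy afternoon in the city'],
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'document-indexer', // appears in operation.name and resource.name
  },
});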

Span Details

Basic LLM span information

Many spans that use LLMs (ai.generateText, ai.generateText.doGenerate, ai.streamText, ai.streamText.doStream) contain the following attributes:

  • resource.name: the functionId that was set through telemetry.functionId
  • ai.model.id: the id of the model
  • ai.model.provider: the provider of the model
  • ai.request.headers.*: the request headers that were passed in through headers
  • ai.response.providerMetadata: provider specific metadata returned with the generation response
  • ai.settings.maxRetries: the maximum number of retries that were set
  • ai.telemetry.functionId: the functionId that was set through telemetry.functionId
  • ai.telemetry.metadata.*: the metadata that was passed in through telemetry.metadata
  • ai.usage.completionTokens: the number of completion tokens that were used
  • ai.usage.promptTokens: the number of prompt tokens that were used

Call LLM span information

Spans that correspond to individual LLM calls (ai.generateText.doGenerate, ai.streamText.doStream) contain basic LLM span information and the following attributes:

  • ai.response.model: the model that was used to generate the response. This can be different from the model that was requested if the provider supports aliases.
  • ai.response.id: the id of the response. Uses the ID from the provider when available.
  • ai.response.timestamp: the timestamp of the response. Uses the timestamp from the provider when available.
  • Semantic Conventions for GenAI operations
    • gen_ai.system: the provider that was used
    • gen_ai.request.model: the model that was requested
    • gen_ai.request.temperature: the temperature that was set
    • gen_ai.request.max_tokens: the maximum number of tokens that were set
    • gen_ai.request.frequency_penalty: the frequency penalty that was set
    • gen_ai.request.presence_penalty: the presence penalty that was set
    • gen_ai.request.top_k: the topK parameter value that was set
    • gen_ai.request.top_p: the topP parameter value that was set
    • gen_ai.request.stop_sequences: the stop sequences
    • gen_ai.response.finish_reasons: the finish reasons that were returned by the provider
    • gen_ai.response.model: the model that was used to generate the response. This can be different from the model that was requested if the provider supports aliases.
    • gen_ai.response.id: the id of the response. Uses the ID from the provider when available.
    • gen_ai.usage.input_tokens: the number of prompt tokens that were used
    • gen_ai.usage.output_tokens: the number of completion tokens that were used

Basic embedding span information

Many spans that use embedding models (ai.embed, ai.embed.doEmbed, ai.embedMany, ai.embedMany.doEmbed) contain the following attributes:

  • ai.model.id: the id of the model
  • ai.model.provider: the provider of the model
  • ai.request.headers.*: the request headers that were passed in through headers
  • ai.settings.maxRetries: the maximum number of retries that were set
  • ai.telemetry.functionId: the functionId that was set through telemetry.functionId
  • ai.telemetry.metadata.*: the metadata that was passed in through telemetry.metadata
  • ai.usage.tokens: the number of tokens that were used
  • resource.name: the functionId that was set through telemetry.functionId

Tool call spans

Tool call spans (ai.toolCall) contain the following attributes:

  • operation.name: "ai.toolCall"
  • ai.operationId: "ai.toolCall"
  • ai.toolCall.name: the name of the tool
  • ai.toolCall.id: the id of the tool call
  • ai.toolCall.args: the input parameters of the tool call
  • ai.toolCall.result: the output result of the tool call. Only available if the tool call is successful and the result is serializable.
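
A sketch that produces an ai.toolCall span (the weather tool and its schema are illustrative):

import { generateText, tool } from 'ai';
import { z } from 'zod';

const result = await generateText({
  model: "anthropic/claude-sonnet-4.5",
  prompt: 'What is the weather in Berlin?',
  tools: {
    weather: tool({
      description: 'Get the weather for a city.',
      inputSchema: z.object({ city: z.string() }), // recorded as ai.toolCall.args
      // ai.toolCall.result is recorded because this return value is serializable.
      execute: async ({ city }) => ({ city, temperatureC: 21 }),
    }),
  },
  experimental_telemetry: { isEnabled: true },
});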