Overview
Vercel’s AI SDK has built-in support for OpenTelemetry-based tracing. Use this guide if you have already instrumented your Next.js application with Sentry. If you are not using Sentry, please follow the instructions in this guide instead.
- Set the HoneyHive endpoint and headers in your environment variables
- Add the HoneyHive span processor in your sentry.&lt;client, server, edge&gt;.config.ts files
- Generate a client-side sessionId and pass it to your AI SDK calls to link multiple AI SDK requests to the same user session
Step 1: Set HoneyHive endpoint and headers in your environment variables
To configure HoneyHive to consume Next.js telemetry data (routed via Sentry), set the following environment variables:

- OTEL_EXPORTER_OTLP_ENDPOINT is the HoneyHive API endpoint for consuming Next.js telemetry data.
- HH_API_KEY is your HoneyHive API key.
- HH_PROJECT_NAME is your HoneyHive project name.

If you set these in your .env file, make sure to reload your application after setting the variables.
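For reference, a .env file with these variables might look like the following. The endpoint URL and values shown are placeholders, not real HoneyHive credentials; substitute the values from your HoneyHive dashboard.

```shell
# .env — placeholder values; replace with your own HoneyHive endpoint and keys
OTEL_EXPORTER_OTLP_ENDPOINT=https://your-honeyhive-otlp-endpoint.example.com
HH_API_KEY=your-honeyhive-api-key
HH_PROJECT_NAME=your-project-name
```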
Step 2: Add the HoneyHive span processor in your Sentry config files
In your Sentry instrumentation for Next.js, you will have one or more config files with these names:

- sentry.client.config.ts
- sentry.server.config.ts
- sentry.edge.config.ts

Add the HoneyHive span processor to each of these files so that spans from every runtime are exported to HoneyHive.
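A minimal sketch of what this could look like in sentry.server.config.ts, using a standard OTLP HTTP exporter wrapped in a batch span processor. The header names ("Authorization", "x-honeyhive-project") and the openTelemetrySpanProcessors init option are assumptions — confirm the exact registration hook and headers against your Sentry SDK version and HoneyHive's docs.

```typescript
// sentry.server.config.ts — a sketch, not a confirmed HoneyHive setup.
import * as Sentry from "@sentry/nextjs";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

// Export spans to the HoneyHive OTLP endpoint configured in Step 1.
const honeyhiveExporter = new OTLPTraceExporter({
  url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT,
  headers: {
    // Assumed header names — verify against HoneyHive's docs.
    Authorization: `Bearer ${process.env.HH_API_KEY}`,
    "x-honeyhive-project": process.env.HH_PROJECT_NAME ?? "",
  },
});

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  tracesSampleRate: 1.0,
  // Assumes a Sentry SDK version that forwards spans to additional
  // OpenTelemetry span processors via this option.
  openTelemetrySpanProcessors: [new BatchSpanProcessor(honeyhiveExporter)],
});
```

Batching the export (rather than sending each span individually) keeps the overhead of the extra processor low on the server and edge runtimes.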
Step 3: Connect your AI SDK calls to HoneyHive
Since your AI application likely makes multiple API calls to the AI SDK, you will want to link those calls to the same user chat session. To do this, we recommend generating a client-side sessionId and passing it to each AI SDK call. A valid sessionId is a random UUIDv4 string.
For example, you can generate a sessionId when your client-side page is mounted.

First, install uuid:
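The install command, assuming npm (use your project's package manager of choice):

```shell
npm install uuid
```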
The sessionId will help us link multiple traces to the same user session.
You can find a complete example of this integration in our NextJS Cookbook.