Overview

Vercel’s AI SDK has built-in support for OpenTelemetry-based tracing.

Use this guide if you have already instrumented your NextJS application with Sentry. If you are not using Sentry, follow our standalone integration guide instead.

Prerequisite: Ensure that your Sentry instrumentation is enabled. For setup details, refer to Sentry’s NextJS integration guide.

For production LLM monitoring and evaluation, you can add HoneyHive to your Sentry instrumentation in three steps:

  1. Set the HoneyHive endpoint and headers in your environment variables
  2. Add the HoneyHive span processor to your sentry.{client,server,edge}.config.ts files
  3. Generate a client-side sessionId and pass it to your AI SDK call to link multiple AI SDK requests to the same user session.

Step 1: Set HoneyHive endpoint and headers in your environment variables

To configure HoneyHive to consume NextJS’s telemetry data (routed via Sentry), set the following environment variables:

OTEL_EXPORTER_OTLP_ENDPOINT=https://api.honeyhive.ai/nextjs
HH_API_KEY=<your-honeyhive-api-key>
HH_PROJECT_NAME=<your-honeyhive-project-name>

  • OTEL_EXPORTER_OTLP_ENDPOINT: the HoneyHive endpoint that ingests NextJS telemetry data
  • HH_API_KEY: your HoneyHive API key
  • HH_PROJECT_NAME: the name of your HoneyHive project

If you set these in a .env file, restart your application so the new values are picked up.
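Optionally, you can fail fast when any of these variables is missing. Below is a minimal sketch (the file path and helper name are hypothetical) that you could call during server startup:

// lib/check-honeyhive-env.ts (hypothetical path and name)
const required = [
  'OTEL_EXPORTER_OTLP_ENDPOINT',
  'HH_API_KEY',
  'HH_PROJECT_NAME',
] as const;

export function checkHoneyHiveEnv(): void {
  // surface misconfiguration at startup instead of as silently missing traces
  const missing = required.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing HoneyHive env vars: ${missing.join(', ')}`);
  }
}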

Step 2: Add the HoneyHive span processor in your Sentry config files

In your Sentry instrumentation for NextJS, you will have one or more config files with the names:

  • sentry.client.config.ts
  • sentry.server.config.ts
  • sentry.edge.config.ts

To route the telemetry data to HoneyHive, add the HoneyHive OTEL span processor to the Sentry client.

First, install the OTEL libraries used below:

pnpm i @opentelemetry/exporter-trace-otlp-http @opentelemetry/sdk-trace-base

Then, add the HoneyHive span processor to each of these files by pasting in the following code:

import * as Sentry from "@sentry/nextjs";

// add the following imports
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

// no changes needed here
const client = Sentry.init({
  dsn: "https://9ee4c459d...9031168",
  integrations: [
    ...
  ],
  tracesSampleRate: 1,
  replaysSessionSampleRate: 0.1,
  replaysOnErrorSampleRate: 1.0,
});

// add the following at the end of the file
client?.traceProvider?.addSpanProcessor(
  new BatchSpanProcessor(
    new OTLPTraceExporter({
      url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT,
      headers: {
        "Authorization": `Bearer ${process.env.HH_API_KEY}`,
        "x-honeyhive": `project:${process.env.HH_PROJECT_NAME}`,
      },
    })
  )
);

This routes a copy of the telemetry data to HoneyHive, which processes the LLM traces. Note that sentry.client.config.ts runs in the browser, where NextJS only exposes environment variables prefixed with NEXT_PUBLIC_; if you add the processor there, make sure the variables are available client-side and consider whether you want your API key in the browser bundle.
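The processor above exports every span Sentry’s trace provider produces. If you prefer to send only AI SDK spans to HoneyHive, one option is to subclass the batch processor and drop everything else. This is a sketch that assumes the Vercel AI SDK convention of prefixing its span names with "ai.":

import { BatchSpanProcessor, ReadableSpan } from '@opentelemetry/sdk-trace-base';

// forwards only Vercel AI SDK spans ("ai.streamText", "ai.generateText", ...)
class AiOnlyBatchSpanProcessor extends BatchSpanProcessor {
  onEnd(span: ReadableSpan): void {
    if (span.name.startsWith('ai.')) {
      super.onEnd(span);
    }
  }
}

You would then pass new AiOnlyBatchSpanProcessor(new OTLPTraceExporter({ ... })) to addSpanProcessor instead of the plain BatchSpanProcessor.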

Step 3: Connect your AI SDK calls to HoneyHive

Since your AI application likely makes multiple calls to the AI SDK, you will want to link those calls to the same user chat session. To do this, we recommend generating a client-side sessionId and passing it to each AI SDK call. A valid sessionId is a random UUID v4 string.

First, install uuid:

npm install uuid

Then, generate a sessionId when your client-side page is mounted:

import { useEffect, useState } from 'react';
import { v4 as uuidv4 } from 'uuid';

const [sessionId, setSessionId] = useState<string | null>(null);

// generate a fresh sessionId once, when the component mounts
useEffect(() => {
  setSessionId(uuidv4());
}, []);
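The AI SDK call itself runs on the server, so the client needs to send this sessionId along with each chat request. A minimal sketch, assuming a /api/chat route (the route path and request shape are illustrative, not fixed by the SDK):

// client-side: send the sessionId with every chat request
const response = await fetch('/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ messages, sessionId }),
});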

Finally, on the server, pass the sessionId to your AI SDK call along with the other metadata:

import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

const result = streamText({
  model: openai('gpt-4o'),
  messages,
  experimental_telemetry: {
    isEnabled: true,
    metadata: {
      sessionId, // your client-side sessionId
      sessionName: 'customer-support-chat', // your session name
      source: 'prod', // dev, prod, etc. Defaults to 'dev' if not set
      project: 'my-honeyhive-project', // only needed if not passed in headers
    },
  },
});
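For context, here is a sketch of where that call might sit in a route handler that receives the client’s sessionId; the file path is illustrative, and the streaming response helper varies across AI SDK versions:

// app/api/chat/route.ts (illustrative path)
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  // read the client-generated sessionId alongside the chat messages
  const { messages, sessionId } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    // experimental_telemetry metadata as shown above
    experimental_telemetry: {
      isEnabled: true,
      metadata: { sessionId },
    },
  });

  return result.toTextStreamResponse();
}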

The sessionId will help us link multiple traces to the same user session.

You can find a complete example of this integration in our NextJS Cookbook.