Vercel AI SDK + NextJS + Sentry Integration
This guide explains how to instrument HoneyHive in a NextJS application that uses the Vercel AI SDK and Sentry.
Overview
Vercel’s AI SDK has built-in support for OpenTelemetry-based tracing.
Use this guide if you have already instrumented Sentry with your NextJS application. If you are not using Sentry, please follow the instructions in this guide instead.
Prerequisite: Ensure that your Sentry instrumentation is enabled. You can refer to Sentry’s NextJS integration guide.
For production LLM monitoring and evaluation, you can add HoneyHive to your Sentry instrumentation in 3 easy steps:
- Set the HoneyHive endpoint and headers in your environment variables
- Add the HoneyHive span processor in your `sentry.<client, server, edge>.config.ts` files
- Generate a client-side `sessionId` and pass it to your AI SDK calls to link multiple AI SDK requests to the same user session
Step 1: Set HoneyHive endpoint and headers in your environment variables
To configure HoneyHive to consume NextJS’s telemetry data (routed via Sentry), set the following environment variables:
- `OTEL_EXPORTER_OTLP_ENDPOINT`: the HoneyHive API endpoint for consuming NextJS telemetry data.
- `HH_API_KEY`: your HoneyHive API key.
- `HH_PROJECT_NAME`: your HoneyHive project name.

If you set these in your `.env` file, make sure to reload your application after setting the variables.
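For example, a `.env.local` sketch (all values are placeholders; substitute the endpoint, key, and project name from your HoneyHive account):

```
OTEL_EXPORTER_OTLP_ENDPOINT=<your HoneyHive OTLP endpoint>
HH_API_KEY=<your HoneyHive API key>
HH_PROJECT_NAME=<your HoneyHive project name>
```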
Step 2: Add the HoneyHive span processor in your Sentry config files
In your Sentry instrumentation for NextJS, you will have one or more config files with the names:
- `sentry.client.config.ts`
- `sentry.server.config.ts`
- `sentry.edge.config.ts`
To route the telemetry data to HoneyHive, add the HoneyHive OTEL span processor to the Sentry client.
First install the following OTEL libraries:
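The original install command is not reproduced on this page; assuming the processor is built from the standard OTLP HTTP exporter, the install would look something like:

```shell
npm install @opentelemetry/sdk-trace-base @opentelemetry/exporter-trace-otlp-http
```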
Then, add the HoneyHive span processor to these files by pasting the following code:
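The pasted snippet itself is not reproduced here, so the following is a sketch of what the span processor setup could look like. The header names and the Sentry option for registering extra span processors are assumptions; verify both against HoneyHive’s and Sentry’s documentation for your SDK versions:

```typescript
// sentry.server.config.ts (sketch; mirror this in the client and edge configs)
import * as Sentry from "@sentry/nextjs";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";

// Exporter pointed at the HoneyHive OTLP endpoint configured in Step 1.
// The header names below are assumptions; use the ones HoneyHive specifies.
const honeyHiveExporter = new OTLPTraceExporter({
  url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT,
  headers: {
    Authorization: `Bearer ${process.env.HH_API_KEY}`,
    "x-honeyhive-project": process.env.HH_PROJECT_NAME ?? "",
  },
});

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  tracesSampleRate: 1.0,
  // Hypothetical option name: recent Sentry SDKs allow attaching additional
  // OpenTelemetry span processors, but the exact mechanism varies by version.
  openTelemetrySpanProcessors: [new BatchSpanProcessor(honeyHiveExporter)],
});
```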
This will ensure that the telemetry data is routed to HoneyHive for processing LLM traces.
Step 3: Connect your AI SDK calls to HoneyHive
Since your AI application likely makes multiple API calls through the AI SDK, you will want to link those calls to the same user chat session. To do this, we recommend generating a client-side `sessionId` and passing it to your AI SDK calls. A valid `sessionId` is a random UUIDv4 string.
First, install the `uuid` package:
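The install command is not shown on this page; with npm it would be:

```shell
npm install uuid
```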
Then, generate a sessionId when your client-side page is mounted:
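A sketch of this step (file and component names are illustrative), using a lazy `useState` initializer so the id is generated exactly once per mounted page:

```typescript
// app/chat/page.tsx (illustrative path)
"use client";

import { useState } from "react";
import { v4 as uuidv4 } from "uuid";

export default function ChatPage() {
  // The lazy initializer runs once on mount, so sessionId
  // stays stable across re-renders.
  const [sessionId] = useState(() => uuidv4());

  // ...render your chat UI and include sessionId in each AI SDK request
  return <main data-session-id={sessionId} />;
}
```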
Finally, you can pass the sessionId to your AI SDK call along with the other metadata:
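For example, in a route handler using the AI SDK’s `streamText` with its `experimental_telemetry` option (the model choice and the exact metadata keys HoneyHive expects are assumptions; verify the key names against HoneyHive’s docs):

```typescript
// app/api/chat/route.ts (sketch): forward the client's sessionId into
// the AI SDK's OpenTelemetry metadata so HoneyHive can group the traces.
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

export async function POST(req: Request) {
  const { messages, sessionId } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages,
    experimental_telemetry: {
      isEnabled: true,
      // Assumed metadata key: "sessionId" is what HoneyHive uses here
      // to link traces to a session.
      metadata: { sessionId },
    },
  });

  return result.toDataStreamResponse();
}
```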
The `sessionId` will help us link multiple traces to the same user session.
You can find a complete example of this integration in our NextJS Cookbook.