OpenAI provides GPT models for chat completions, embeddings, and more. HoneyHive integrates with OpenAI via the OpenInference instrumentor, automatically capturing all API calls, tool use, and token usage.

Quick Start

Add HoneyHive tracing in just four lines of code. Drop this into your existing OpenAI app and all chat completions, tool calls, and embeddings are traced automatically.
To see where to initialize the tracer for your environment, including AWS Lambda and long-running servers, see Tracer Initialization.
pip install "honeyhive[openinference-openai]>=1.0.0rc0"

# Or install separately
pip install "honeyhive>=1.0.0rc0" openinference-instrumentation-openai openai
import os
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)

# Your existing OpenAI code works unchanged

What Gets Traced

The instrumentor automatically captures:
  • Chat completions - client.chat.completions.create() with inputs, outputs, and token usage
  • Tool / function calls - Each tool call with arguments and results
  • Embeddings - client.embeddings.create() requests
  • Streaming responses - Streamed completions with aggregated tokens
No manual instrumentation required.
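For example, embeddings requests are captured the same way as chat completions. A minimal sketch, assuming the tracer and instrumentor from the Quick Start are already set up; the `extract_vectors` helper and the model name are illustrative, not part of the HoneyHive API:

```python
def extract_vectors(response):
    """Illustrative helper: one embedding vector per input text."""
    return [item.embedding for item in response.data]

if __name__ == "__main__":
    import openai

    client = openai.OpenAI()
    # With the instrumentor active, this call is traced automatically
    response = client.embeddings.create(
        model="text-embedding-3-small",
        input=["Paris is the capital of France."],
    )
    print(len(extract_vectors(response)[0]))
```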

Example: Chat Completion

import openai
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer = HoneyHiveTracer.init(project="your-project")
OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)

client = openai.OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    max_tokens=50,
)
print(response.choices[0].message.content)

# A follow-up call - also traced
response2 = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Tell me a fun fact about Paris."},
    ],
    max_tokens=100,
)
print(response2.choices[0].message.content)
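Streaming completions are traced as well, with the streamed tokens aggregated onto the span. A sketch under the same setup; the `collect_stream` helper below is illustrative, showing how content deltas arrive chunk by chunk:

```python
def collect_stream(chunks):
    """Illustrative helper: join content deltas from a streamed completion."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta
        if delta.content:  # some chunks carry no content (e.g. role, finish)
            parts.append(delta.content)
    return "".join(parts)

if __name__ == "__main__":
    import openai

    client = openai.OpenAI()
    stream = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Name three Paris landmarks."}],
        stream=True,
    )
    print(collect_stream(stream))
```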

Environment Configuration

# HoneyHive configuration
export HH_API_KEY="your-honeyhive-api-key"
export HH_PROJECT="your-project"

# OpenAI configuration
export OPENAI_API_KEY="your-openai-api-key"

Troubleshooting

Traces not appearing

  1. Check environment variables - Ensure HH_API_KEY and HH_PROJECT are set
  2. Pass the tracer provider - The instrumentor must receive tracer_provider=tracer.provider:
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer = HoneyHiveTracer.init(project="your-project")

# ✅ Correct - pass tracer_provider
OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)

# ❌ Wrong - missing tracer_provider
OpenAIInstrumentor().instrument()
  3. Initialize before making calls - Call instrument() before creating the OpenAI client

Enrich Your Traces

Add user IDs and custom metadata to traces

Custom Spans

Create spans for business logic around API calls

Distributed Tracing

Trace calls across service boundaries

Resources