You have working LLM code and want to add observability without rewriting anything. This guide shows you how to add HoneyHive tracing with a few lines in your app’s entry point or runtime setup layer, with no changes to your existing logic. What you need:
  • HoneyHive API key - get it at app.honeyhive.ai (click your org name → Copy API Key)
  • Your LLM provider’s API key (OpenAI, Anthropic, etc.)
Time: 5 minutes

Three Steps to Add Tracing

1. Install HoneyHive

pip install "honeyhive[openinference-openai]>=1.0.0rc0"

2. Add 5 Lines in Your App’s Runtime Entry Point

from honeyhive import HoneyHiveTracer 
from openinference.instrumentation.openai import OpenAIInstrumentor 
tracer = HoneyHiveTracer.init(api_key="your-key", project="your-project") 
instrumentor = OpenAIInstrumentor() 
instrumentor.instrument(tracer_provider=tracer.provider) 

# Your existing code below stays exactly the same
Keep the order the same in every runtime: initialize HoneyHiveTracer first, then initialize instrumentors with tracer.provider.
For the full runtime-specific patterns, see Tracer Initialization.

3. Run Your App

export HH_API_KEY="your-honeyhive-key"
export OPENAI_API_KEY="your-openai-key"
python your_app.py
View traces at app.honeyhive.ai → your project → Traces.

Integration Examples

Example 1: Simple Chatbot

# ========== ADD THESE 5 LINES ==========
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer = HoneyHiveTracer.init(api_key="your-key", project="chatbot")
instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)
# ========== YOUR EXISTING CODE (NO CHANGES) ==========

import openai

client = openai.OpenAI()

def chat(message):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": message}]
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    result = chat("Hello, how are you?")
    print(result)
That’s it - a small runtime setup block, zero changes to your existing functions.

Example 2: Multi-Step Application

# ========== ADD THESE 5 LINES ==========
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.anthropic import AnthropicInstrumentor

tracer = HoneyHiveTracer.init(api_key="your-key", project="my-app")
instrumentor = AnthropicInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)
# ========== YOUR EXISTING CODE (NO CHANGES) ==========

import anthropic

def summarize_text(text):
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-haiku-4-5-20251001",
        max_tokens=500,
        messages=[{"role": "user", "content": f"Summarize: {text}"}]
    )
    return response.content[0].text

def generate_questions(summary):
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-haiku-4-5-20251001",
        max_tokens=300,
        messages=[{"role": "user", "content": f"Generate 3 questions: {summary}"}]
    )
    return response.content[0].text

if __name__ == "__main__":
    summary = summarize_text("Long article text here...")
    questions = generate_questions(summary)
    print(questions)
Both LLM calls are traced automatically. You’ll see the complete chain in HoneyHive.
For production, use environment variables instead of hardcoding keys:
from honeyhive import HoneyHiveTracer 
from openinference.instrumentation.openai import OpenAIInstrumentor 

# Reads HH_API_KEY and HH_PROJECT from environment 
tracer = HoneyHiveTracer.init() 
instrumentor = OpenAIInstrumentor() 
instrumentor.instrument(tracer_provider=tracer.provider) 
Set these environment variables:
export HH_API_KEY="your-honeyhive-key"
export HH_PROJECT="production-app"
export OPENAI_API_KEY="your-openai-key"
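A missing environment variable otherwise surfaces as a confusing error deep inside the SDK. One way to fail fast is a small check before initializing the tracer; `require_env` below is a hypothetical helper, not part of the HoneyHive SDK:

```python
import os

# Hypothetical fail-fast helper (not part of the HoneyHive SDK):
# verify required keys exist before initializing the tracer.
def require_env(*names):
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError(
            "Missing environment variables: " + ", ".join(missing)
        )
    return [os.environ[n] for n in names]

# Call this before HoneyHiveTracer.init(), e.g.:
# require_env("HH_API_KEY", "HH_PROJECT", "OPENAI_API_KEY")
```

This keeps the failure at startup, with a message naming exactly which keys are missing.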

What Gets Traced?

All LLM SDK calls are traced automatically, including:
  • Chat completions, embeddings, and streaming
  • Function/tool calling
  • Multi-turn conversations
Each trace captures model, prompts, responses, tokens, latency, and costs. See the integration guides for details: OpenAI, Anthropic, and more providers.

Multiple Providers

Using OpenAI and Anthropic in the same app? Initialize both instrumentors with the same tracer:
from honeyhive import HoneyHiveTracer 
from openinference.instrumentation.openai import OpenAIInstrumentor 
from openinference.instrumentation.anthropic import AnthropicInstrumentor 

tracer = HoneyHiveTracer.init(api_key="your-key", project="my-app") 

OpenAIInstrumentor().instrument(tracer_provider=tracer.provider) 
AnthropicInstrumentor().instrument(tracer_provider=tracer.provider) 

# Both providers now traced
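With both instrumentors active, an app can route each call to the right SDK by model name and every call still lands in the matching instrumentor. `provider_for` and `complete` below are hypothetical helpers for illustration; the SDKs are imported lazily so the routing logic itself has no hard dependency on both packages:

```python
# Hypothetical dispatch helpers: pick the SDK by model name so each
# call flows through the matching instrumentor.
def provider_for(model: str) -> str:
    return "anthropic" if model.startswith("claude") else "openai"

def complete(model: str, prompt: str) -> str:
    if provider_for(model) == "anthropic":
        import anthropic  # imported lazily, only when needed
        resp = anthropic.Anthropic().messages.create(
            model=model,
            max_tokens=300,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text
    import openai  # imported lazily, only when needed
    resp = openai.OpenAI().chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```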

Troubleshooting

Traces not showing up?
  • Check that HH_API_KEY is set correctly
  • Verify the project name matches between your code and the dashboard
  • Wait 2-3 seconds for traces to be processed
  • Look for errors in console output
Seeing import errors? Install with the extras for your provider:
pip install "honeyhive[openinference-openai]>=1.0.0rc0"
Not sure where the setup code goes? Add it in the entry point or runtime setup layer where your app boots. For scripts, that is often main.py or app.py. For Lambda, use cached setup outside the handler. For FastAPI, Flask, or Django, initialize once at app startup and create a session per request. See Tracer Initialization for the runtime-specific patterns.
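For a long-running web service, the initialize-once pattern can be sketched with FastAPI's lifespan hook. FastAPI here is an assumption for illustration; the HoneyHive calls are the same ones from step 2, reading HH_API_KEY and HH_PROJECT from the environment:

```python
from contextlib import asynccontextmanager

from fastapi import FastAPI
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Runs once at startup, before any request is served:
    # tracer first, then the instrumentor with tracer.provider.
    tracer = HoneyHiveTracer.init()  # reads HH_API_KEY and HH_PROJECT
    OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)
    yield

app = FastAPI(lifespan=lifespan)
```

Request handlers then use the OpenAI client as usual; see Tracer Initialization for the authoritative per-runtime patterns.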
For more help, see Troubleshooting Guide or join our Discord.

What’s Next?