BYOI Advantage: HoneyHive’s core SDK has zero dependencies on the OpenAI library. You control your openai version—upgrade immediately when new features ship, without waiting for SDK updates.
Compatibility
Python Version Support
| Support Level | Python Versions |
|---|---|
| Fully Supported | 3.11, 3.12, 3.13 |
| Not Supported | 3.10 and below |
OpenAI SDK Requirements
- Minimum: `openai >= 1.0.0`
- Recommended: `openai >= 2.26.0`
Known Limitations
- Streaming: requires manual span finalization for proper trace completion
- Batch API: limited instrumentor support; manual tracing recommended
- Function calling: fully supported with both instrumentors
- Vision API: supported in OpenAI SDK >= 1.11.0 and traced automatically
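The streaming limitation exists because the instrumented call returns before the stream is consumed, so a span can close before the last chunk arrives. As a library-agnostic sketch of the finalization pattern (the `on_complete` callback is a stand-in for whatever span-finalization hook you use, not a HoneyHive API), you can wrap the iterator and finalize only once it is exhausted:

```python
from typing import Callable, Iterable, Iterator

def finalize_stream(chunks: Iterable[str], on_complete: Callable[[str], None]) -> Iterator[str]:
    """Yield chunks as they arrive, then invoke the callback with the full text."""
    collected = []
    for chunk in chunks:
        collected.append(chunk)
        yield chunk
    # Runs only after the caller has consumed the entire stream
    on_complete("".join(collected))

# Example: accumulate a fake token stream, recording the final output on completion
final = {}
for token in finalize_stream(["Hel", "lo", "!"], lambda text: final.update(text=text)):
    pass
# final["text"] now holds "Hello!"
```

The same shape applies to OpenAI streams: iterate the response chunks through a wrapper like this and do your span bookkeeping in the completion callback.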
Choose Your Instrumentor
HoneyHive supports two instrumentor options for OpenAI:
| Instrumentor | Best For | Install |
|---|---|---|
| OpenInference | Open-source, lightweight, getting started | pip install "honeyhive[openinference-openai]>=1.0.0rc0" |
| Traceloop | Production, cost tracking, enhanced metrics | pip install "honeyhive[traceloop-openai]>=1.0.0rc0" |
Quick Start with OpenInference
Installation
```bash
# Recommended: Install with OpenAI integration
pip install "honeyhive[openinference-openai]>=1.0.0rc0"

# Alternative: Manual installation (quote version specifiers so the shell
# does not treat ">=" as a redirect)
pip install "honeyhive>=1.0.0rc0" openinference-instrumentation-openai "openai>=1.0.0"
```
Basic Setup
```python
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor
import openai

# Step 1: Initialize the HoneyHive tracer first
tracer = HoneyHiveTracer.init(
    project="your-project"  # Or set the HH_PROJECT environment variable
)  # Uses HH_API_KEY from the environment

# Step 2: Initialize the instrumentor with the tracer provider
instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

# Now all OpenAI calls are automatically traced!
client = openai.OpenAI()  # Uses OPENAI_API_KEY automatically
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
# ✨ Automatically traced!
```
Order matters! The tracer must be initialized BEFORE calling instrumentor.instrument().
Quick Start with Traceloop
Installation
```bash
# Recommended: Install with Traceloop OpenAI integration
pip install "honeyhive[traceloop-openai]>=1.0.0rc0"

# Alternative: Manual installation (quote version specifiers so the shell
# does not treat ">=" as a redirect)
pip install "honeyhive>=1.0.0rc0" opentelemetry-instrumentation-openai "openai>=1.0.0"
```
Basic Setup
```python
from honeyhive import HoneyHiveTracer
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
import openai

# Step 1: Initialize the HoneyHive tracer first
tracer = HoneyHiveTracer.init(
    project="your-project"
)

# Step 2: Initialize the Traceloop instrumentor with the tracer provider
instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

# All OpenAI calls traced with enhanced metrics!
client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}]
)
```
Instrumentor Comparison
| Feature | OpenInference | Traceloop |
|---|---|---|
| Setup Complexity | Simple | Simple |
| Token Tracking | Basic span attributes | Detailed metrics + costs |
| Model Metrics | Model name, timing | Cost per model, latency |
| Performance | Lightweight, fast | Optimized with batching |
| Cost Analysis | Manual calculation | Automatic per request |
| Best For | Simple integrations, dev | Production, cost optimization |
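Since OpenInference leaves cost analysis to you, one option is to compute per-request cost from the token usage the instrumentor already records. A minimal sketch, assuming illustrative per-1K-token prices (placeholders, not current OpenAI rates; check the official pricing page):

```python
# Illustrative per-1K-token prices in USD -- NOT current OpenAI rates
PRICES = {
    "gpt-3.5-turbo": {"prompt": 0.0005, "completion": 0.0015},
    "gpt-4": {"prompt": 0.03, "completion": 0.06},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of one request from its token usage."""
    p = PRICES[model]
    return (prompt_tokens * p["prompt"] + completion_tokens * p["completion"]) / 1000

# Usage: feed in the prompt/completion counts from response.usage
cost = estimate_cost("gpt-3.5-turbo", prompt_tokens=120, completion_tokens=80)
# cost is roughly $0.00018 at the placeholder rates above
```

With Traceloop, this calculation happens automatically per request; with OpenInference, a helper like this can be applied to the `usage` field each response returns.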
Advanced Usage with @trace Decorator
For explicit control over which functions are traced, combine instrumentors with the @trace decorator:
```python
from honeyhive import HoneyHiveTracer, trace, enrich_span
from openinference.instrumentation.openai import OpenAIInstrumentor
import openai

# Initialize tracer and instrumentor
tracer = HoneyHiveTracer.init(
    api_key="your-honeyhive-key",
    project="your-project",
    source="production"
)
instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

@trace
def multi_model_comparison(prompt: str) -> dict:
    """Compare responses across multiple models."""
    client = openai.OpenAI()

    # Add business context
    enrich_span({
        "use_case": "model_comparison",
        "input_length": len(prompt)
    })

    models = ["gpt-3.5-turbo", "gpt-4", "gpt-4-turbo-preview"]
    results = []

    for model in models:
        try:
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
                max_tokens=150
            )
            results.append({
                "model": model,
                "response": response.choices[0].message.content,
                "usage": response.usage.model_dump() if response.usage else None
            })
        except Exception as e:
            results.append({"model": model, "error": str(e)})

    enrich_span({
        "models_tested": len(models),
        "successful": sum(1 for r in results if "response" in r)
    })

    return {"prompt": prompt, "results": results}
```
Multiple Instrumentors
You can use multiple instrumentors for different providers:
```python
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor
from openinference.instrumentation.anthropic import AnthropicInstrumentor

# Step 1: Initialize the tracer
tracer = HoneyHiveTracer.init(project="multi-provider-app")

# Step 2: Initialize one instrumentor per provider
openai_instrumentor = OpenAIInstrumentor()
anthropic_instrumentor = AnthropicInstrumentor()

openai_instrumentor.instrument(tracer_provider=tracer.provider)
anthropic_instrumentor.instrument(tracer_provider=tracer.provider)

# Both OpenAI and Anthropic calls are now traced!
```
Environment Configuration
```bash
# HoneyHive configuration
export HH_API_KEY="your-honeyhive-api-key"
export HH_PROJECT="your-project"
export HH_SOURCE="production"

# OpenAI configuration
export OPENAI_API_KEY="your-openai-api-key"
```
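Because the tracer reads `HH_API_KEY` and friends from the environment, a missing variable tends to surface later as a confusing initialization failure. An optional pre-flight check (plain Python, not a HoneyHive API) lets you fail fast with a clear message:

```python
import os

def missing_env_vars(required):
    """Return the names of required environment variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

# Run before HoneyHiveTracer.init() to surface configuration gaps early
missing = missing_env_vars(["HH_API_KEY", "HH_PROJECT", "OPENAI_API_KEY"])
if missing:
    print(f"Set these environment variables first: {missing}")
```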
What Gets Traced
With instrumentors initialized, these OpenAI calls are automatically traced:
- `client.chat.completions.create()` - chat completions
- `client.completions.create()` - legacy completions
- `client.embeddings.create()` - embeddings
- Streaming responses
- Function/tool calling
- Vision API calls
- Structured outputs (JSON mode, Pydantic)
Captured data includes:
- Model name and parameters
- Input prompts/messages
- Output responses
- Token usage (prompt, completion, total)
- Latency metrics
- Errors and exceptions
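As a rough picture of how those fields end up on a span, the sketch below flattens one completion into attribute key/value pairs. The attribute names here are illustrative only, not the exact OpenInference or Traceloop semantic conventions:

```python
def response_to_attributes(model, messages, response_text, usage):
    """Flatten one chat completion into span-attribute key/value pairs.
    Attribute names are illustrative, not a real instrumentor's conventions."""
    attrs = {
        "llm.model": model,
        "llm.input.messages": str(messages),
        "llm.output.text": response_text,
    }
    if usage:
        attrs.update({
            "llm.usage.prompt_tokens": usage["prompt_tokens"],
            "llm.usage.completion_tokens": usage["completion_tokens"],
            "llm.usage.total_tokens": usage["total_tokens"],
        })
    return attrs

# Example with the shape of data a chat completion returns
attrs = response_to_attributes(
    "gpt-3.5-turbo",
    [{"role": "user", "content": "Hello!"}],
    "Hi there!",
    {"prompt_tokens": 9, "completion_tokens": 3, "total_tokens": 12},
)
```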
Example: Function Calling
```python
from honeyhive import HoneyHiveTracer, trace
from openinference.instrumentation.openai import OpenAIInstrumentor
import openai
import json

tracer = HoneyHiveTracer.init(project="function-calling-demo")
instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"}
                },
                "required": ["location"]
            }
        }
    }
]

@trace
def weather_assistant(query: str):
    """Assistant with weather function calling."""
    client = openai.OpenAI()

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": query}],
        tools=tools,
        tool_choice="auto"
    )

    # Handle function calls
    if response.choices[0].message.tool_calls:
        tool_call = response.choices[0].message.tool_calls[0]
        args = json.loads(tool_call.function.arguments)

        # Simulate weather lookup
        weather_result = {"temp": "72°F", "conditions": "Sunny"}

        # Continue the conversation with the function result
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "user", "content": query},
                response.choices[0].message,
                {
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "content": json.dumps(weather_result)
                }
            ]
        )

    return response.choices[0].message.content

# All calls traced automatically!
result = weather_assistant("What's the weather in Paris?")
```
Example: Structured Outputs
```python
from honeyhive import HoneyHiveTracer, trace
from openinference.instrumentation.openai import OpenAIInstrumentor
from pydantic import BaseModel
import openai

tracer = HoneyHiveTracer.init(project="structured-outputs")
instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

class WeatherResponse(BaseModel):
    location: str
    temperature: float
    unit: str
    conditions: str

@trace
def get_structured_weather(location: str) -> WeatherResponse:
    """Get weather as a structured Pydantic model."""
    client = openai.OpenAI()

    completion = client.beta.chat.completions.parse(
        model="gpt-4o-2024-08-06",
        messages=[
            {"role": "user", "content": f"What's the weather in {location}?"}
        ],
        response_format=WeatherResponse
    )

    return completion.choices[0].message.parsed

weather = get_structured_weather("Tokyo")
print(f"{weather.location}: {weather.temperature}°{weather.unit}")
```
Troubleshooting
Missing Traces
Ensure correct initialization order:
```python
# ✅ Correct
tracer = HoneyHiveTracer.init(project="my-project")
instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

# ❌ Wrong - instrumentor before tracer
instrumentor = OpenAIInstrumentor()
instrumentor.instrument()  # No tracer_provider!
tracer = HoneyHiveTracer.init(project="my-project")
```
Import Errors
```bash
# For OpenInference
pip install "honeyhive[openinference-openai]>=1.0.0rc0"

# For Traceloop
pip install "honeyhive[traceloop-openai]>=1.0.0rc0"
```
Different Projects for Client vs Server
All services contributing to the same trace must use the same project:
```python
tracer = HoneyHiveTracer.init(
    project="shared-project",  # Must match across services
    source="api-server"        # Can differ per service
)
```
Migration Between Instrumentors
From OpenInference to Traceloop:
```python
# Before (OpenInference)
from openinference.instrumentation.openai import OpenAIInstrumentor

# After (Traceloop) - just change the import
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

# The rest of the code stays the same!
tracer = HoneyHiveTracer.init(project="your-project")
instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)
```