BYOI Advantage: HoneyHive has zero dependencies on the OpenAI SDK. Use openai at any version—no conflicts with your Azure deployment requirements.
Compatibility
Python Version Support
| Support Level | Python Versions |
|---|---|
| Fully Supported | 3.11, 3.12, 3.13 |
| Not Supported | 3.10 and below |
OpenAI SDK Requirements
- Minimum: openai >= 1.0.0
- Recommended: openai >= 1.10.0
- Tested Versions: 1.10.0, 1.11.0, 1.12.0, 1.13.0
Azure OpenAI uses the standard openai Python package with the AzureOpenAI client class.
Known Limitations
- Deployment Names: Must configure Azure deployment names separately from model names
- API Versions: Requires Azure API version in configuration
- Managed Identity: Supported with additional Azure SDK configuration
- Streaming: Not a limitation — fully supported with both instrumentors
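For managed identity, the configuration can look roughly like the sketch below. This is a hedged example, not HoneyHive-specific code: `make_azure_client` is a hypothetical helper, the azure-identity package must be installed, and the token scope shown is the standard Cognitive Services scope.

```python
# Sketch: authenticating AzureOpenAI with Microsoft Entra ID (managed
# identity) instead of an API key. Requires the azure-identity package;
# make_azure_client is a hypothetical helper for illustration only.
def make_azure_client(endpoint: str):
    """Build an AzureOpenAI client that authenticates via Entra ID."""
    from azure.identity import DefaultAzureCredential, get_bearer_token_provider
    from openai import AzureOpenAI

    token_provider = get_bearer_token_provider(
        DefaultAzureCredential(),
        "https://cognitiveservices.azure.com/.default",
    )
    return AzureOpenAI(
        azure_ad_token_provider=token_provider,  # replaces api_key
        api_version="2024-02-01",
        azure_endpoint=endpoint,
    )
```

Tracing is unaffected by the authentication method: the instrumentor wraps the client's API calls either way.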
Choose Your Instrumentor
Azure OpenAI uses the same OpenAI instrumentor as the standard OpenAI API:
| Instrumentor | Status | Best For | Install |
|---|---|---|---|
| OpenInference | Fully Supported | All Azure OpenAI models | pip install "honeyhive[openinference-openai]>=1.0.0rc0" |
| Traceloop | Fully Supported | Production with cost tracking | pip install "honeyhive[traceloop-openai]>=1.0.0rc0" |
Quick Start with OpenInference
Installation
```bash
# Recommended: install with the OpenAI integration (works for Azure too)
pip install "honeyhive[openinference-openai]>=1.0.0rc0"

# Alternative: manual installation
pip install "honeyhive>=1.0.0rc0" openinference-instrumentation-openai "openai>=1.0.0"
```
Basic Setup
```python
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor
from openai import AzureOpenAI
import os

# Step 1: Initialize the HoneyHive tracer first
tracer = HoneyHiveTracer.init(
    project="your-project"  # Or set the HH_PROJECT environment variable
)  # Uses HH_API_KEY from the environment

# Step 2: Initialize the instrumentor with tracer_provider
# The same instrumentor works for both OpenAI and Azure OpenAI!
instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

# Create the Azure OpenAI client
client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-02-01",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)

# Now all Azure OpenAI calls are automatically traced!
response = client.chat.completions.create(
    model="gpt-35-turbo",  # Your deployment name
    messages=[{"role": "user", "content": "Hello from Azure!"}],
)
```
Order matters! The tracer must be initialized BEFORE calling instrumentor.instrument().
Quick Start with Traceloop
Installation
```bash
# Recommended: install with the Traceloop OpenAI integration
pip install "honeyhive[traceloop-openai]>=1.0.0rc0"

# Alternative: manual installation
pip install "honeyhive>=1.0.0rc0" opentelemetry-instrumentation-openai "openai>=1.0.0"
```
Basic Setup
```python
from honeyhive import HoneyHiveTracer
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
from openai import AzureOpenAI
import os

# Step 1: Initialize the HoneyHive tracer first
tracer = HoneyHiveTracer.init(project="your-project")

# Step 2: Initialize the Traceloop instrumentor
instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

# Create the Azure OpenAI client and use it normally
client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-02-01",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)

response = client.chat.completions.create(
    model="gpt-35-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
)
```
Instrumentor Comparison
| Feature | OpenInference | Traceloop |
|---|---|---|
| Status | Fully Supported | Fully Supported |
| Token Tracking | Basic | Detailed + costs |
| Azure Quotas | Basic | Enhanced tracking |
| Performance | Lightweight | Smart batching |
| Best For | Dev, simple integration | Production |
Example: Basic Chat Completion
```python
from honeyhive import HoneyHiveTracer, trace
from openinference.instrumentation.openai import OpenAIInstrumentor
from openai import AzureOpenAI
import os

tracer = HoneyHiveTracer.init(project="azure-demo")
instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-02-01",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)

@trace
def chat(message: str) -> str:
    """Simple chat with Azure OpenAI."""
    response = client.chat.completions.create(
        model="gpt-35-turbo",  # Your deployment name
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": message},
        ],
        temperature=0.7,
        max_tokens=150,
    )
    return response.choices[0].message.content

result = chat("What is the capital of France?")
```
Example: Function Calling
```python
from honeyhive import HoneyHiveTracer, trace, enrich_span
from openinference.instrumentation.openai import OpenAIInstrumentor
from openai import AzureOpenAI
import os
import json

tracer = HoneyHiveTracer.init(project="azure-function-calling")
instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-02-01",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City and country, e.g., 'Paris, France'",
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                    },
                },
                "required": ["location"],
            },
        },
    }
]

@trace
def weather_assistant(query: str):
    """Assistant with a weather tool."""
    enrich_span({"task": "function_calling"})
    response = client.chat.completions.create(
        model="gpt-4",  # Your GPT-4 deployment name
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": query},
        ],
        tools=tools,
        tool_choice="auto",
    )
    message = response.choices[0].message

    # Handle a tool call
    if message.tool_calls:
        tool_call = message.tool_calls[0]
        args = json.loads(tool_call.function.arguments)
        # Simulate the weather lookup
        weather_result = {"temp": "18°C", "conditions": "Partly cloudy"}
        # Continue the conversation with the tool result
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": query},
                message,
                {
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "content": json.dumps(weather_result),
                },
            ],
        )
        return response.choices[0].message.content
    return message.content

result = weather_assistant("What's the weather in Paris?")
```
Example: Structured Output
```python
from honeyhive import HoneyHiveTracer, trace
from openinference.instrumentation.openai import OpenAIInstrumentor
from openai import AzureOpenAI
from pydantic import BaseModel
import os

tracer = HoneyHiveTracer.init(project="azure-structured")
instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-02-01",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)

class WeatherInfo(BaseModel):
    location: str
    temperature: float
    unit: str
    conditions: str
    humidity: int

@trace
def get_weather_structured(location: str) -> WeatherInfo:
    """Get structured weather output via a Pydantic model."""
    completion = client.beta.chat.completions.parse(
        model="gpt-4",  # Your deployment name
        messages=[
            {"role": "system", "content": "Provide weather info."},
            {"role": "user", "content": f"Weather in {location}?"},
        ],
        response_format=WeatherInfo,
    )
    return completion.choices[0].message.parsed

weather = get_weather_structured("Tokyo, Japan")
```
Example: Multi-Deployment Comparison
```python
from honeyhive import HoneyHiveTracer, trace, enrich_span
from openinference.instrumentation.openai import OpenAIInstrumentor
from openai import AzureOpenAI
import os

tracer = HoneyHiveTracer.init(project="azure-comparison")
instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-02-01",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)

@trace
def compare_deployments(prompt: str) -> dict:
    """Compare responses across Azure deployments."""
    deployments = [
        "gpt-35-turbo",  # Your GPT-3.5 deployment
        "gpt-4",         # Your GPT-4 deployment
        "gpt-4-turbo",   # Your GPT-4 Turbo deployment
    ]
    results = {}
    for deployment in deployments:
        try:
            response = client.chat.completions.create(
                model=deployment,
                messages=[{"role": "user", "content": prompt}],
                max_tokens=150,
            )
            results[deployment] = {
                "content": response.choices[0].message.content,
                "tokens": response.usage.total_tokens,
            }
        except Exception as e:
            results[deployment] = {"error": str(e)}
    enrich_span({
        "deployments_tested": deployments,
        "successful": len([r for r in results.values() if "error" not in r]),
    })
    return results

comparison = compare_deployments("Explain cloud computing briefly.")
```
Example: Multi-Turn Conversation
```python
from honeyhive import HoneyHiveTracer, trace
from openinference.instrumentation.openai import OpenAIInstrumentor
from openai import AzureOpenAI
import os

tracer = HoneyHiveTracer.init(project="azure-conversation")
instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-02-01",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)

class Conversation:
    def __init__(self, system_message: str = "You are a helpful assistant."):
        self.messages = [{"role": "system", "content": system_message}]

    @trace
    def chat(self, user_message: str) -> str:
        """Add a message and get the response."""
        self.messages.append({"role": "user", "content": user_message})
        response = client.chat.completions.create(
            model="gpt-4",
            messages=self.messages,
            max_tokens=150,
        )
        assistant_message = response.choices[0].message.content
        self.messages.append({"role": "assistant", "content": assistant_message})
        return assistant_message

# Usage
conv = Conversation("You are a knowledgeable assistant.")
print(conv.chat("Tell me about the Apollo 11 mission."))
print(conv.chat("Who were the astronauts?"))
print(conv.chat("Now explain photosynthesis."))
```
Environment Configuration
```bash
# HoneyHive configuration
export HH_API_KEY="your-honeyhive-api-key"
export HH_PROJECT="your-project"
export HH_SOURCE="production"

# Azure OpenAI configuration
export AZURE_OPENAI_API_KEY="your-azure-openai-key"
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export AZURE_OPENAI_API_VERSION="2024-02-01"

# Optional: deployment names
export GPT35_DEPLOYMENT="gpt-35-turbo"
export GPT4_DEPLOYMENT="gpt-4"
```
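Because Azure deployment names are configured separately from model names, it can help to centralize the lookup of the optional deployment-name variables above. The sketch below is a hypothetical helper (`resolve_deployment` is not part of the HoneyHive SDK), falling back to common defaults when the variables are unset:

```python
import os

# Sketch: resolve an Azure deployment name from the optional environment
# variables, falling back to a default. resolve_deployment is a
# hypothetical helper, not part of the HoneyHive SDK.
_DEPLOYMENT_ENV_VARS = {
    "gpt-3.5": ("GPT35_DEPLOYMENT", "gpt-35-turbo"),
    "gpt-4": ("GPT4_DEPLOYMENT", "gpt-4"),
}

def resolve_deployment(model_family: str) -> str:
    """Map a model family to its Azure deployment name via env vars."""
    env_var, default = _DEPLOYMENT_ENV_VARS[model_family]
    return os.getenv(env_var, default)
```

You would then pass `resolve_deployment("gpt-4")` as the `model` argument instead of hard-coding deployment names throughout your code.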
What Gets Traced
With instrumentors initialized, these Azure OpenAI calls are automatically traced:
- `client.chat.completions.create()` - Chat completions
- `client.completions.create()` - Text completions
- `client.embeddings.create()` - Embeddings
- `client.images.generate()` - Image generation (DALL-E)
- Streaming responses
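For streamed responses, the consumption pattern is the same as with the standard OpenAI API: pass `stream=True` and iterate the returned chunks, joining the partial text from `choices[0].delta.content`. The sketch below shows that accumulation pattern with stand-in chunk objects so it does not depend on a live client; `collect_stream` is a hypothetical helper.

```python
from types import SimpleNamespace

# Sketch: accumulating a streamed chat completion. With a real client you
# would pass stream=True to client.chat.completions.create() and feed the
# returned iterator to this helper. Partial text arrives in
# choices[0].delta.content, which may be None for some chunks.
def collect_stream(chunks) -> str:
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:
            parts.append(delta)
    return "".join(parts)

# Stand-in chunks mimicking the shape of streamed responses
fake_chunks = [
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=c))])
    for c in ["Hel", "lo", None, "!"]
]
print(collect_stream(fake_chunks))  # → Hello!
```

The instrumentors trace the streamed call as a single span, so no extra handling is needed on the tracing side.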
Captured data includes:
- Deployment name and model
- Input messages
- Output responses
- Token usage
- Latency metrics
- Azure-specific metadata
- Errors and exceptions
Troubleshooting
Missing Traces
Ensure correct initialization order:
```python
# ✅ Correct: tracer first, then instrumentor
tracer = HoneyHiveTracer.init(project="my-project")
instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

# ❌ Wrong: instrumentor before the tracer exists
instrumentor = OpenAIInstrumentor()
instrumentor.instrument()  # No tracer_provider!
tracer = HoneyHiveTracer.init(project="my-project")
```
Deployment Name vs Model Name
```python
# In Azure OpenAI, the 'model' parameter takes your deployment name
response = client.chat.completions.create(
    model="my-gpt4-deployment",  # Azure deployment name, not the model name
    messages=[{"role": "user", "content": "Hello"}],
)
```
API Version Errors
```python
# Use a supported API version
client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-02-01",  # Check the Azure docs for the latest version
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)
```
Import Errors
```bash
# The same instrumentor packages cover both OpenAI and Azure OpenAI
pip install "honeyhive[openinference-openai]>=1.0.0rc0"

# Or Traceloop
pip install "honeyhive[traceloop-openai]>=1.0.0rc0"
```
Migration Between Instrumentors
From OpenInference to Traceloop:
```python
# Before (OpenInference)
from openinference.instrumentation.openai import OpenAIInstrumentor

# After (Traceloop) - just change the import
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

# The rest of the code stays the same!
```