Azure OpenAI provides OpenAI models hosted on Azure infrastructure. HoneyHive integrates with Azure OpenAI using the same OpenInference OpenAI instrumentor, automatically capturing all API calls, tool use, and token usage.
Quick Start
Add HoneyHive tracing in just 4 lines of code. Azure OpenAI uses the same openai Python package and the same instrumentor as standard OpenAI.
To see where to initialize the tracer for your environment, including AWS Lambda and long-running servers, see Tracer Initialization.
```shell
pip install "honeyhive[openinference-openai]>=1.0.0rc0"

# Or install separately
pip install "honeyhive>=1.0.0rc0" openinference-instrumentation-openai openai
```

```python
import os

from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)

# Your existing Azure OpenAI code works unchanged
```
Azure OpenAI uses the standard openai Python package with the AzureOpenAI client class. The same OpenAIInstrumentor works for both.
What Gets Traced
The instrumentor automatically captures:
- Chat completions - `client.chat.completions.create()` with inputs, outputs, and token usage
- Tool / function calls - each tool call with its arguments and results
- Embeddings - `client.embeddings.create()` requests
- Streaming responses - streamed completions with aggregated tokens
No manual instrumentation required.
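To illustrate what "aggregated tokens" means for streaming, each streamed chunk carries a content delta, and joining the deltas yields the full completion text that ends up on the trace. A minimal sketch; the `SimpleNamespace` objects below are stand-ins shaped like the real `openai` stream chunks, not actual API responses:

```python
from types import SimpleNamespace


def aggregate_stream(chunks):
    """Join content deltas from streamed chat-completion chunks into the
    full response text (mirrors what the instrumentor records)."""
    parts = []
    for chunk in chunks:
        # The final chunk of a stream typically has a None content delta
        if chunk.choices and chunk.choices[0].delta.content:
            parts.append(chunk.choices[0].delta.content)
    return "".join(parts)


# Hypothetical stand-in chunks shaped like openai streaming deltas
fake_chunks = [
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content="Hello"))]),
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=", world"))]),
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=None))]),
]
print(aggregate_stream(fake_chunks))  # Hello, world
```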
Example: Basic Chat Completion
```python
import os

from honeyhive import HoneyHiveTracer
from openai import AzureOpenAI
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer = HoneyHiveTracer.init(project="your-project")
OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-10-21",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT", ""),
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # Your deployment name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
)
print(response.choices[0].message.content)
```
Environment Configuration
```shell
# HoneyHive configuration
export HH_API_KEY="your-honeyhive-api-key"
export HH_PROJECT="your-project"

# Azure OpenAI configuration
export AZURE_OPENAI_API_KEY="your-azure-openai-key"
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
```
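A missing variable usually surfaces later as an opaque authentication error, so it can help to check for all four at startup. A small sketch; the `missing_env_vars` helper is illustrative, not part of the HoneyHive SDK:

```python
import os

REQUIRED_VARS = [
    "HH_API_KEY",
    "HH_PROJECT",
    "AZURE_OPENAI_API_KEY",
    "AZURE_OPENAI_ENDPOINT",
]


def missing_env_vars(env=None):
    """Return the names of required variables that are unset or empty."""
    if env is None:
        env = os.environ
    return [name for name in REQUIRED_VARS if not env.get(name)]


# At startup, warn early instead of failing later with an auth error
missing = missing_env_vars()
if missing:
    print("Warning: missing environment variables:", ", ".join(missing))
```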
Troubleshooting
Traces not appearing
- Check environment variables - ensure `HH_API_KEY` and `HH_PROJECT` are set
- Pass the tracer provider - the instrumentor must receive `tracer_provider=tracer.provider`:

```python
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer = HoneyHiveTracer.init(project="your-project")

# ✅ Correct - pass tracer_provider
OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)

# ❌ Wrong - missing tracer_provider
OpenAIInstrumentor().instrument()
```

- Initialize before making calls - call `instrument()` before creating the `AzureOpenAI` client
Deployment name vs model name
In Azure OpenAI, the `model` parameter is your deployment name, not the underlying model name:

```python
response = client.chat.completions.create(
    model="my-gpt4-deployment",  # This is your Azure deployment name
    messages=[{"role": "user", "content": "Hello"}],
)
```
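Since deployment names often differ between environments, one common pattern is to read the name from an environment variable so the same code runs against differently named deployments. A sketch; `AZURE_OPENAI_DEPLOYMENT` is a variable name chosen here for illustration, not something the SDK reads automatically:

```python
import os


def deployment_name(env=None, default="gpt-4o-mini"):
    """Resolve the Azure deployment name from the environment,
    falling back to a default for local development."""
    if env is None:
        env = os.environ
    return env.get("AZURE_OPENAI_DEPLOYMENT", default)


# Usage (hypothetical):
# response = client.chat.completions.create(
#     model=deployment_name(),
#     messages=[{"role": "user", "content": "Hello"}],
# )
```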
Resources
- OpenAI Integration - standard OpenAI API integration
- Enrich Your Traces - add user IDs and custom metadata to traces
- Distributed Tracing - trace calls across service boundaries