Azure OpenAI provides OpenAI models hosted on Azure infrastructure. HoneyHive integrates with Azure OpenAI using the same OpenInference OpenAI instrumentor, automatically capturing all API calls, tool use, and token usage.
## Quick Start
Add HoneyHive tracing in just 4 lines of code. Azure OpenAI uses the same `openai` Python package and the same instrumentor as standard OpenAI.

To see where to initialize the tracer for your environment, including AWS Lambda and long-running servers, see Tracer Initialization.
```shell
pip install "honeyhive[openinference-azure-openai]"

# Or install separately
pip install honeyhive openinference-instrumentation-openai openai azure-identity
```
```python
import os

from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)

# Your existing Azure OpenAI code works unchanged
```
Azure OpenAI uses the standard `openai` Python package with the `AzureOpenAI` client class. The same `OpenAIInstrumentor` works for both.
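If you authenticate with Microsoft Entra ID instead of an API key, only the client construction changes; tracing works the same. A minimal sketch, assuming the standard Cognitive Services token scope and a recent `azure-identity` release that provides `get_bearer_token_provider` (no request is made here, so no network access is needed to construct the client):

```python
import os

from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# azure-identity fetches and refreshes Entra ID tokens on demand
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(),
    "https://cognitiveservices.azure.com/.default",
)

client = AzureOpenAI(
    api_version="2024-10-21",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT", ""),
    azure_ad_token_provider=token_provider,
)
# Calls made with this client are traced exactly like key-based clients
```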
## Tested Versions
HoneyHive’s Azure OpenAI integration is tested against the following versions on PyPI, as of May 2026. Newer patch releases are generally safe; if you hit an issue, pin to these versions to reproduce a known-good configuration.
| Package | Version |
| --- | --- |
| `openinference-instrumentation-openai` | `>= 0.1.0` |
| `openai` | `>= 1.0.0` |
| `azure-identity` | `>= 1.12.0` |
Requires Python 3.11+.
## What Gets Traced
The instrumentor automatically captures:
- **Chat completions** - `client.chat.completions.create()` with inputs, outputs, and token usage
- **Tool / function calls** - Each tool call with arguments and results
- **Embeddings** - `client.embeddings.create()` requests
- **Streaming responses** - Streamed completions with aggregated tokens
No manual instrumentation required.
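As a rough illustration of the token-usage data a chat-completion span carries, the keys below approximate OpenInference semantic conventions; treat the exact attribute names and values as an assumption that depends on your instrumentor version:

```python
# Illustrative span attributes recorded for one chat completion.
# Keys approximate OpenInference semantic conventions; values are made up.
span_attributes = {
    "llm.model_name": "gpt-4o-mini",
    "llm.token_count.prompt": 24,
    "llm.token_count.completion": 8,
    "llm.token_count.total": 32,
}

# The total is the sum of prompt and completion tokens
assert span_attributes["llm.token_count.total"] == (
    span_attributes["llm.token_count.prompt"]
    + span_attributes["llm.token_count.completion"]
)
```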
## Example: Basic Chat Completion
```python
import os

from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor
from openai import AzureOpenAI

tracer = HoneyHiveTracer.init(project="your-project")
OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-10-21",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT", ""),
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # Your deployment name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
)
print(response.choices[0].message.content)
```
## Environment Configuration
```shell
# HoneyHive configuration
export HH_API_KEY="your-honeyhive-api-key"
export HH_PROJECT="your-project"

# Azure OpenAI configuration
export AZURE_OPENAI_API_KEY="your-azure-openai-key"
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
```
## Troubleshooting

### Traces not appearing
1. **Check environment variables** - Ensure `HH_API_KEY` and `HH_PROJECT` are set
2. **Pass the tracer provider** - The instrumentor must receive `tracer_provider=tracer.provider`:

   ```python
   from honeyhive import HoneyHiveTracer
   from openinference.instrumentation.openai import OpenAIInstrumentor

   tracer = HoneyHiveTracer.init(project="your-project")

   # ✅ Correct - pass tracer_provider
   OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)

   # ❌ Wrong - missing tracer_provider
   OpenAIInstrumentor().instrument()
   ```

3. **Initialize before making calls** - Call `instrument()` before creating the `AzureOpenAI` client
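The first failure mode above can be ruled out with a small preflight check before initializing the tracer. This helper is not part of the HoneyHive SDK, just a local sketch:

```python
import os

# Variables this integration expects, per the Environment Configuration section
REQUIRED_VARS = ("HH_API_KEY", "HH_PROJECT",
                 "AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT")

def missing_env_vars(required=REQUIRED_VARS):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.getenv(name)]

missing = missing_env_vars()
if missing:
    print("Set these before initializing tracing:", ", ".join(missing))
```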
### Deployment name vs model name

In Azure OpenAI, the `model` parameter is your **deployment name**, not the underlying model name:
```python
response = client.chat.completions.create(
    model="my-gpt4-deployment",  # This is your Azure deployment name
    messages=[{"role": "user", "content": "Hello"}],
)
```
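Because traces record the deployment name you passed as `model`, it can help to normalize back to the underlying model when analyzing usage across deployments. A sketch with a hypothetical mapping (the deployment names here are invented for illustration and should match your Azure resource):

```python
# Hypothetical deployment-name -> underlying-model mapping
DEPLOYMENT_TO_MODEL = {
    "my-gpt4-deployment": "gpt-4o",
    "my-mini-deployment": "gpt-4o-mini",
}

def underlying_model(deployment_name: str) -> str:
    # Fall back to the deployment name itself when it is not mapped
    return DEPLOYMENT_TO_MODEL.get(deployment_name, deployment_name)

print(underlying_model("my-gpt4-deployment"))  # -> gpt-4o
```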
- **OpenAI Integration** - Standard OpenAI API integration
- **Enrich Your Traces** - Add user IDs and custom metadata to traces
- **Distributed Tracing** - Trace calls across service boundaries
## Using Traceloop (OpenLLMetry) Instead

If your project already uses Traceloop / OpenLLMetry, you can use its OpenAI instrumentor instead of OpenInference. The setup is identical; only the install and import paths differ.
```shell
pip install "honeyhive[traceloop-azure-openai]"

# Or install separately
pip install honeyhive opentelemetry-instrumentation-openai openai azure-identity
```
```python
import os

from honeyhive import HoneyHiveTracer
from openai import AzureOpenAI
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-10-21",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT", ""),
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # Your deployment name
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```
### Tested Versions
| Package | Version |
| --- | --- |
| `opentelemetry-instrumentation-openai` | `>= 0.58.0, < 1.0.0` |
| `openai` | `>= 1.0.0` |
| `azure-identity` | `>= 1.12.0` |
## Resources