Add HoneyHive observability to your OpenAI applications
OpenAI provides GPT models for chat completions, embeddings, and more. HoneyHive integrates with OpenAI via the OpenInference instrumentor, automatically capturing all API calls, tool use, and token usage.
Add HoneyHive tracing with just four lines of code. Drop them into your existing OpenAI app, and all chat completions, tool calls, and embeddings are traced automatically.
To see where to initialize the tracer for your environment, including AWS Lambda and long-running servers, see Tracer Initialization.
```shell
pip install "honeyhive[openinference-openai]>=1.0.0rc0"

# Or install separately
pip install "honeyhive>=1.0.0rc0" openinference-instrumentation-openai openai
```
```python
import os

from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)

# Your existing OpenAI code works unchanged
```
```python
import openai

from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer = HoneyHiveTracer.init(project="your-project")
OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    max_tokens=50,
)
print(response.choices[0].message.content)

# A follow-up call - also traced
response2 = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Tell me a fun fact about Paris."},
    ],
    max_tokens=100,
)
print(response2.choices[0].message.content)
```
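Tool calls are captured on the same trace as the completions around them. The sketch below shows the standard OpenAI function-calling round trip; it assumes the tracer has already been initialized as shown above, and the `get_weather` helper and its schema are illustrative placeholders, not part of HoneyHive or OpenAI.

```python
import json


# Illustrative local tool - a real app would call a weather service here.
def get_weather(city: str) -> str:
    """Pretend weather lookup returning a JSON string."""
    return json.dumps({"city": city, "forecast": "sunny", "temp_c": 22})


# JSON schema the model uses to decide when to invoke the tool.
WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}


def run() -> str:
    # Imported here so the helper above stays dependency-free.
    import openai

    client = openai.OpenAI()
    messages = [{"role": "user", "content": "What's the weather in Paris?"}]

    # First round trip: the model emits a tool call (traced automatically).
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        tools=[WEATHER_TOOL],
    )
    call = response.choices[0].message.tool_calls[0]
    result = get_weather(**json.loads(call.function.arguments))

    # Second round trip: feed the tool result back; both calls land on one trace.
    messages.append(response.choices[0].message)
    messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return final.choices[0].message.content
```

Because the instrumentor hooks the OpenAI client itself, no extra code is needed to record the tool-call arguments or the tool result message; they appear as attributes on the traced spans.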