OpenAI
Learn how to integrate OpenAI with HoneyHive
HoneyHive OpenAI Tracing Guide
This comprehensive guide explains how to use HoneyHive to trace and monitor OpenAI API calls. We’ll cover the setup process and explore each type of trace with practical examples from our cookbook code.
Getting Started
Installation
First, install the required packages as specified in requirements.txt:
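The exact pin list lives in requirements.txt; at minimum, the HoneyHive and OpenAI SDKs are needed (package names assumed here):

```shell
# Install from the cookbook's pin list
pip install -r requirements.txt

# or, at minimum, the two SDKs this guide uses
pip install honeyhive openai
```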
Basic Setup
To start tracing your OpenAI calls, initialize the HoneyHive tracer at the beginning of your application:
This initialization, found in all our example files, enables automatic instrumentation for all OpenAI API calls.
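A minimal initialization sketch, assuming HoneyHive's Python SDK; the API key, project, and session names below are placeholders you would replace with your own:

```python
from honeyhive import HoneyHiveTracer

# Initialize once, before any OpenAI calls are made.
# api_key and project come from your HoneyHive account;
# source and session_name are free-form labels for filtering traces.
HoneyHiveTracer.init(
    api_key="YOUR_HONEYHIVE_API_KEY",
    project="openai-cookbook",          # placeholder project name
    source="dev",
    session_name="openai-tracing-demo",
)
```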
Types of OpenAI Traces
HoneyHive provides automatic instrumentation for various OpenAI features. Let’s examine each type in detail:
1. Basic Chat Completions
The most common OpenAI interaction is the chat completion, which HoneyHive traces automatically.
From basic_chat.py:
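A representative reconstruction of the call (the `ask` helper and model choice are illustrative, not taken verbatim from the file):

```python
def ask(client, question, model="gpt-4o-mini"):
    """One traced chat completion. `client` is an openai.OpenAI instance;
    with HoneyHiveTracer initialized, the call is captured automatically."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Typical usage (requires OPENAI_API_KEY in the environment):
#   from openai import OpenAI
#   print(ask(OpenAI(), "Explain tracing in one sentence."))
```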
What HoneyHive captures:
- Request details (model, messages, parameters)
- Response content
- Token usage (prompt, completion, total)
- Latency metrics
- Any errors or exceptions
Enhancing Chat Completion Traces
For richer context, add custom metadata and tags to your traces, as shown in basic_chat.py:
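A sketch of span enrichment, assuming the SDK exposes `trace` and `enrich_span` helpers (names and exact signatures may differ between SDK versions; the import fallback only lets the sketch run without honeyhive installed):

```python
try:
    from honeyhive import trace, enrich_span
except ImportError:            # fallback so the sketch runs without honeyhive
    def trace(fn):
        return fn
    def enrich_span(**kwargs):
        pass

@trace
def summarize(client, text, model="gpt-4o-mini"):
    # Attach searchable metadata to the span for this call.
    enrich_span(metadata={"feature": "summarization", "input_chars": len(text)})
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": f"Summarize: {text}"}],
    )
    return response.choices[0].message.content
```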
This additional information makes it easier to filter, search, and analyze your traces in the HoneyHive dashboard.
2. Function Calling
Function calling is a powerful OpenAI feature that HoneyHive captures in detail. The trace includes the initial request, function execution, and final response.
From function_calling.py:
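A representative reconstruction of the two-step tool-calling flow (`get_current_weather` and `run_weather_query` are illustrative names):

```python
import json

def get_current_weather(location, unit="fahrenheit"):
    # Stand-in implementation; a real app would call a weather API.
    return json.dumps({"location": location, "temperature": "72", "unit": unit})

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    },
}]

def run_weather_query(client, question, model="gpt-4o-mini"):
    messages = [{"role": "user", "content": question}]
    # First call: the model decides to invoke the tool.
    first = client.chat.completions.create(model=model, messages=messages, tools=TOOLS)
    call = first.choices[0].message.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = get_current_weather(**args)              # function execution
    # Second call: hand the tool result back for the final answer.
    messages.append(first.choices[0].message)
    messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    second = client.chat.completions.create(model=model, messages=messages)
    return second.choices[0].message.content
```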
Additionally, tracing the actual functions being called provides a complete picture:
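A sketch of wrapping the tool implementation in its own span, assuming the SDK exposes a `trace` decorator (the decorator form may vary by SDK version; the import fallback only lets the sketch run without honeyhive installed):

```python
import json

try:
    from honeyhive import trace   # HoneyHive's function-span decorator
except ImportError:               # fallback so the sketch runs standalone
    def trace(fn):
        return fn

@trace
def get_current_weather(location, unit="fahrenheit"):
    """Each invocation shows up as its own span, with arguments and
    return value captured alongside the surrounding API calls."""
    # Stand-in implementation; a real app would call a weather API.
    return json.dumps({"location": location, "temperature": "72", "unit": unit})
```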
What HoneyHive captures for function calling:
- The initial request with tools definition
- Function call arguments from the model
- Function execution details
- Second API call with function results
- Final assistant response
3. Structured Outputs
Structured outputs ensure the model's response adheres to a specific format, whether plain JSON mode, a strict JSON schema, or a Pydantic model. HoneyHive traces these specialized responses, including the schema definition.
From structured_output.py:
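A representative sketch using JSON mode (`extract_profile` and the requested keys are illustrative):

```python
import json

def extract_profile(client, text, model="gpt-4o-mini"):
    """Ask for a JSON object; HoneyHive records the request,
    the raw response, and any parsing errors."""
    response = client.chat.completions.create(
        model=model,
        response_format={"type": "json_object"},  # JSON mode
        messages=[
            {"role": "system",
             "content": "Return a JSON object with keys 'name' and 'age'."},
            {"role": "user", "content": text},
        ],
    )
    return json.loads(response.choices[0].message.content)
```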
More advanced structured outputs using JSON schema:
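A sketch using the strict `json_schema` response format (the schema contents are illustrative):

```python
import json

PROFILE_SCHEMA = {
    "type": "json_schema",
    "json_schema": {
        "name": "profile",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "age": {"type": "integer"},
            },
            "required": ["name", "age"],
            "additionalProperties": False,
        },
    },
}

def extract_profile_strict(client, text, model="gpt-4o-mini"):
    # With strict mode, the model's output is constrained to the schema.
    response = client.chat.completions.create(
        model=model,
        response_format=PROFILE_SCHEMA,
        messages=[{"role": "user", "content": text}],
    )
    return json.loads(response.choices[0].message.content)
```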
And using Pydantic models:
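A sketch using the OpenAI SDK's Pydantic-aware `beta.chat.completions.parse` helper (the model and fields are illustrative; this requires the pydantic package and an API key):

```python
from openai import OpenAI
from pydantic import BaseModel

class Profile(BaseModel):
    name: str
    age: int

client = OpenAI()
completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Ada Lovelace was 36."}],
    response_format=Profile,          # Pydantic model as the schema
)
profile = completion.choices[0].message.parsed  # a validated Profile instance
```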
What HoneyHive captures for structured outputs:
- The schema or model definition
- Response parsing process
- Structured data output
- Any parsing errors
4. Reasoning Models
OpenAI’s reasoning models (o1, o3-mini) have unique tracing needs, particularly around reasoning tokens and effort levels.
From reasoning_models.py:
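A representative sketch: `reasoning_effort` tunes how much the model deliberates, and the reasoning token count is read back from the usage details (`solve` is an illustrative name):

```python
def solve(client, problem, effort="medium", model="o3-mini"):
    """One traced reasoning-model call; HoneyHive surfaces the
    reasoning tokens in the trace's usage details."""
    response = client.chat.completions.create(
        model=model,
        reasoning_effort=effort,  # "low", "medium", or "high"
        messages=[{"role": "user", "content": problem}],
    )
    details = response.usage.completion_tokens_details
    return response.choices[0].message.content, details.reasoning_tokens
```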
You can also compare different reasoning effort levels:
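A sketch of an effort-level comparison; each call becomes its own trace, so the three runs can be compared side by side in the dashboard:

```python
def compare_efforts(client, problem, model="o3-mini"):
    # Run the same prompt at each effort level and collect
    # the reasoning token counts for comparison.
    results = {}
    for effort in ("low", "medium", "high"):
        response = client.chat.completions.create(
            model=model,
            reasoning_effort=effort,
            messages=[{"role": "user", "content": problem}],
        )
        results[effort] = response.usage.completion_tokens_details.reasoning_tokens
    return results
```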
What HoneyHive captures for reasoning models:
- Standard request and response details
- Reasoning token usage
- Reasoning effort level
- Model-specific parameters
5. Multi-turn Conversations
Tracing conversations across multiple turns provides a complete history and context. From multi_turn_conversation.py:
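A representative reconstruction of such a conversation helper (the exact class shape is illustrative):

```python
class Conversation:
    """Accumulates message history so every turn is sent with full context;
    each `send` call is traced individually by HoneyHive."""

    def __init__(self, client, model="gpt-4o-mini",
                 system="You are a helpful assistant."):
        self.client = client
        self.model = model
        self.messages = [{"role": "system", "content": system}]

    def send(self, user_text):
        self.messages.append({"role": "user", "content": user_text})
        response = self.client.chat.completions.create(
            model=self.model,
            messages=self.messages,   # full history on every turn
        )
        reply = response.choices[0].message.content
        self.messages.append({"role": "assistant", "content": reply})
        return reply
```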
Using this class in a full conversation:
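A sketch of driving it across turns (this assumes the Conversation helper from multi_turn_conversation.py exposes a `send(user_text)` method that appends to the history, and it requires an OpenAI API key):

```python
from openai import OpenAI

# Each send() shows up as its own trace, linked by the shared history.
conv = Conversation(OpenAI())
print(conv.send("My name is Ada."))
print(conv.send("What is my name?"))   # answered with full context
print(len(conv.messages))              # system prompt + user/assistant pairs
```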
What HoneyHive captures for multi-turn conversations:
- Individual turns as separate traces
- Message history accumulation
- Token usage across turns
- Context of the entire conversation
- Relationships between turns
Conclusion
HoneyHive provides comprehensive observability for your OpenAI applications, giving you insights into performance, costs, and behavior. With automatic instrumentation and custom tracing, you can easily monitor and optimize your AI system.
Get started by initializing HoneyHive in your application and watch as your OpenAI calls are automatically traced!