HoneyHive OpenAI Tracing Guide
This comprehensive guide explains how to use HoneyHive to trace and monitor OpenAI API calls. We’ll cover the setup process and explore each type of trace with practical examples from our cookbook code.
Getting Started
Installation
First, install the required packages as specified in requirements.txt:
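The cookbook’s requirements.txt is the source of truth, but a minimal environment only needs the HoneyHive SDK and the OpenAI client (package names assumed here, versions unpinned):

```bash
pip install honeyhive openai
```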
Basic Setup
To start tracing your OpenAI calls, initialize the HoneyHive tracer at the beginning of your application:
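A minimal initialization sketch, assuming the SDK’s HoneyHiveTracer.init entry point; the project and session names below are placeholders:

```python
import os

from honeyhive import HoneyHiveTracer

# Initialize once, before any OpenAI calls, so all subsequent calls are traced.
HoneyHiveTracer.init(
    api_key=os.environ["HH_API_KEY"],    # your HoneyHive API key
    project="openai-tracing-cookbook",   # placeholder project name
    session_name="openai-tracing-demo",  # optional: groups related traces
)
```

With the tracer initialized, the OpenAI client needs no further changes; instrumentation is applied automatically.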
Types of OpenAI Traces
HoneyHive provides automatic instrumentation for various OpenAI features. Let’s examine each type in detail:
1. Basic Chat Completions
The most common OpenAI interaction is the chat completion, which HoneyHive traces automatically. From basic_chat.py:
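The cookbook file isn’t reproduced here, but a representative chat completion looks like this (model name and prompts are illustrative):

```python
from openai import OpenAI

client = OpenAI()

# With HoneyHiveTracer initialized, this call is traced automatically.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain distributed tracing in one sentence."},
    ],
    temperature=0.7,
)
print(response.choices[0].message.content)
```

Each call produces a trace that captures: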
- Request details (model, messages, parameters)
- Response content
- Token usage (prompt, completion, total)
- Latency metrics
- Any errors or exceptions
Enhancing Chat Completion Traces
For richer context, add custom metadata and tags to your traces, as shown in basic_chat.py:
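The exact enrichment helpers depend on your SDK version; one sketch, assuming the SDK exposes an enrich_session function for attaching session-level metadata:

```python
from honeyhive import enrich_session

# Attach custom metadata so traces can be filtered and grouped in the UI.
# The keys below are illustrative, not required fields.
enrich_session(
    metadata={
        "user_id": "user-123",
        "environment": "staging",
        "prompt_version": "v2",
    }
)
```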
2. Function Calling
Function calling is a powerful OpenAI feature that HoneyHive captures in detail. The trace includes the initial request, function execution, and final response. From function_calling.py:
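A condensed sketch of that round trip (the tool schema and the get_weather stub are illustrative):

```python
import json

from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Paris right now?"}]

# First call: the model decides to call the tool and returns its arguments.
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
tool_call = first.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)

# Execute the function locally (stubbed here) and feed the result back.
result = {"city": args["city"], "temperature_c": 21, "condition": "sunny"}
messages.append(first.choices[0].message)
messages.append({"role": "tool", "tool_call_id": tool_call.id, "content": json.dumps(result)})

# Second call: the model uses the tool output to produce the final answer.
final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
print(final.choices[0].message.content)
```

Specifically, the resulting trace captures: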
- The initial request with tools definition
- Function call arguments from the model
- Function execution details
- Second API call with function results
- Final assistant response
3. Structured Outputs
Structured outputs ensure the model’s response adheres to a specific format, either JSON or a Pydantic model. HoneyHive traces these specialized responses, including the schema definition. From structured_output.py:
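A sketch using the OpenAI Python SDK’s Pydantic-based parse helper (the CalendarEvent model is illustrative):

```python
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI()

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

# parse() validates and deserializes the response into the Pydantic model.
completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Extract the event details."},
        {"role": "user", "content": "Alice and Bob meet for a design review on Friday."},
    ],
    response_format=CalendarEvent,
)
event = completion.choices[0].message.parsed
print(event.name, event.date, event.participants)
```

Alongside the standard request and response data, the trace captures: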
- The schema or model definition
- Response parsing process
- Structured data output
- Any parsing errors
4. Reasoning Models
OpenAI’s reasoning models (o1, o3-mini) have unique tracing needs, particularly around reasoning tokens and effort levels. From reasoning_models.py:
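A sketch using o3-mini with an explicit effort hint (the prompt is illustrative):

```python
from openai import OpenAI

client = OpenAI()

# Reasoning models accept a reasoning_effort hint and report reasoning tokens
# separately in the usage details.
response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="medium",
    messages=[{"role": "user", "content": "How many prime numbers are there below 50?"}],
)
print(response.choices[0].message.content)
print("Reasoning tokens:", response.usage.completion_tokens_details.reasoning_tokens)
```

Beyond the usual fields, the trace records: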
- Standard request and response details
- Reasoning token usage
- Reasoning effort level
- Model-specific parameters
5. Multi-turn Conversations
Tracing conversations across multiple turns provides a complete history and context. From multi_turn_conversation.py:
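A minimal loop that carries the accumulated history into every request (prompts are illustrative):

```python
from openai import OpenAI

client = OpenAI()
messages = [{"role": "system", "content": "You are a concise travel assistant."}]

# Each turn appends to the shared history, so every request carries the full
# conversation context, and each API call is traced individually.
for user_input in ["Suggest a city for a long weekend.", "What should I pack?"]:
    messages.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print(reply)
```

Across the conversation, the traces capture: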
- Individual turns as separate traces
- Message history accumulation
- Token usage across turns
- Context of the entire conversation
- Relationships between turns