# LiteLLM Integration with HoneyHive
This guide demonstrates how to integrate HoneyHive tracing with LiteLLM, a unified interface for calling 100+ LLMs using the OpenAI format, to monitor and optimize your LLM operations.
## Prerequisites
- A HoneyHive account and API key
- Python 3.8+
- Basic understanding of LLMs and tracing
## Installation
First, install the required packages:
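Assuming both SDKs are published under their usual PyPI names:

```bash
pip install honeyhive litellm
```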
## Setup and Configuration

### Initialize HoneyHive Tracer
Start by initializing the HoneyHive tracer at the beginning of your application:
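A minimal setup might look like this; the `project`, `source`, and `session_name` values are placeholders to replace with your own:

```python
from honeyhive import HoneyHiveTracer

# Initialize once, at startup, before any LLM calls are made.
HoneyHiveTracer.init(
    api_key="your-honeyhive-api-key",   # or read it from an environment variable
    project="your-project-name",
    source="dev",
    session_name="litellm-integration-run",
)
```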
### Configure LiteLLM
Next, set up LiteLLM with your API keys:
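LiteLLM reads provider credentials from each provider's standard environment variables, so configuration can be as simple as:

```python
import os

# Set keys only for the providers you actually call through LiteLLM.
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-api-key"
```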
## Tracing LiteLLM Operations

### Initialize LiteLLM with Tracing
Use the `@trace` decorator to monitor LiteLLM initialization:
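LiteLLM has no client object to construct, so the "initialization" traced below is simply a function that applies your global LiteLLM settings; `drop_params` is one illustrative example of such a setting:

```python
import litellm
from honeyhive import trace

@trace
def initialize_litellm():
    # Record the global settings applied before any calls as a span.
    litellm.drop_params = True  # silently drop params a provider doesn't support
    return {"drop_params": litellm.drop_params}

initialize_litellm()
```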
### Generate Completions with Tracing
Trace the completion generation process:
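A sketch of a traced completion helper (the function name and default model are illustrative):

```python
from honeyhive import trace
from litellm import completion

@trace
def generate_completion(prompt, model="gpt-3.5-turbo"):
    # LiteLLM accepts OpenAI-style messages for any supported provider.
    response = completion(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(generate_completion("Write a haiku about observability."))
```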
### Generate Chat Completions with Tracing
Trace chat completion operations:
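The same pattern extends to multi-turn chat by passing the full message history through a traced function:

```python
from honeyhive import trace
from litellm import completion

@trace
def generate_chat_completion(messages, model="gpt-4"):
    # `messages` is a list of {"role": ..., "content": ...} dicts,
    # exactly as in the OpenAI chat format.
    response = completion(model=model, messages=messages)
    return response.choices[0].message.content

history = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What does LiteLLM do?"},
]
print(generate_chat_completion(history))
```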
### Generate Embeddings with Tracing
Monitor embedding generation:
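LiteLLM returns embeddings in the OpenAI response shape, so a traced helper might look like:

```python
from honeyhive import trace
from litellm import embedding

@trace
def generate_embeddings(texts, model="text-embedding-ada-002"):
    response = embedding(model=model, input=texts)
    return [item["embedding"] for item in response.data]

vectors = generate_embeddings(["HoneyHive", "LiteLLM"])
print(len(vectors), len(vectors[0]))
```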
## Complete Example
Here’s a complete example of using LiteLLM with HoneyHive tracing:
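All keys, project names, and models below are placeholders:

```python
import os

from honeyhive import HoneyHiveTracer, trace
from litellm import completion, embedding

# Placeholder credentials: load real secrets from your environment.
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"

HoneyHiveTracer.init(
    api_key="your-honeyhive-api-key",
    project="your-project-name",
    session_name="litellm-complete-example",
)

@trace
def generate_completion(prompt, model="gpt-3.5-turbo"):
    response = completion(model=model, messages=[{"role": "user", "content": prompt}])
    return response.choices[0].message.content

@trace
def generate_chat_completion(messages, model="gpt-3.5-turbo"):
    response = completion(model=model, messages=messages)
    return response.choices[0].message.content

@trace
def generate_embeddings(texts, model="text-embedding-ada-002"):
    response = embedding(model=model, input=texts)
    return [item["embedding"] for item in response.data]

if __name__ == "__main__":
    print(generate_completion("Summarize LLM tracing in one sentence."))
    print(generate_chat_completion([
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Name one benefit of model fallbacks."},
    ]))
    vectors = generate_embeddings(["tracing", "observability"])
    print(f"Generated {len(vectors)} embeddings of dimension {len(vectors[0])}")
```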
## What’s Being Traced
With this integration, HoneyHive captures:
- LiteLLM Initialization: Configuration and setup of LiteLLM
- Completion Generation: Performance metrics for generating completions
- Chat Completion Generation: Metrics for chat-based completions
- Embedding Generation: Performance of embedding operations
- Fallback Processing: Success rates and performance of fallback mechanisms
- Batch Processing: Metrics for processing multiple prompts
## Viewing Traces in HoneyHive
After running your application:
1. Log into your HoneyHive account
2. Navigate to your project
3. View the traces in the Sessions tab
4. Analyze the performance of each LLM operation
## Advanced Features

### Tracing with Model Fallbacks
LiteLLM supports fallback mechanisms when a primary model fails. You can trace this behavior to understand failure patterns:
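One way to make fallbacks visible in traces is an explicit loop over candidate models inside a traced function, so every failed attempt is recorded before the next model is tried. (LiteLLM also ships built-in fallback support; the manual sketch below just keeps the logic easy to inspect in a trace.)

```python
from honeyhive import trace
from litellm import completion

@trace
def completion_with_fallback(prompt, models=("gpt-4", "gpt-3.5-turbo")):
    last_error = None
    for model in models:
        try:
            response = completion(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            # Record which model ultimately served the request.
            return {"model_used": model, "content": response.choices[0].message.content}
        except Exception as err:
            # Try the next model; the exception stays visible in the trace
            # for failure-pattern analysis.
            last_error = err
    raise last_error
```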
### Tracing Batch Processing
For batch operations, you can trace the entire batch process as well as individual completions:
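A sketch in which the batch function and the per-prompt helper are both traced, so the batch and each individual completion are captured:

```python
from honeyhive import trace
from litellm import completion

@trace
def generate_completion(prompt, model="gpt-3.5-turbo"):
    response = completion(model=model, messages=[{"role": "user", "content": prompt}])
    return response.choices[0].message.content

@trace
def process_batch(prompts, model="gpt-3.5-turbo"):
    # Each generate_completion call is itself decorated with @trace.
    return [generate_completion(prompt, model=model) for prompt in prompts]

results = process_batch([
    "Define observability.",
    "Define tracing.",
])
```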
## Best Practices
- Use descriptive session names to easily identify different runs
- Add custom attributes to traces for more detailed analysis (see the `enrich_span` sketch after this list)
- Trace both successful operations and error handling paths
- Consider tracing with different model configurations to compare performance
- Use HoneyHive’s evaluation capabilities to assess response quality
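For the custom-attributes tip above: recent HoneyHive SDK versions expose an `enrich_span` helper for attaching metadata and metrics to the current span from inside a traced function; check your SDK version's docs for the exact fields it accepts. A sketch:

```python
from honeyhive import enrich_span, trace
from litellm import completion

@trace
def answer_with_metadata(prompt, model="gpt-3.5-turbo"):
    response = completion(model=model, messages=[{"role": "user", "content": prompt}])
    content = response.choices[0].message.content
    # Attach custom attributes to this span for later filtering and analysis.
    enrich_span(
        metadata={"model": model, "prompt_chars": len(prompt)},
        metrics={"response_chars": len(content)},
    )
    return content
```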
## Troubleshooting
If you encounter issues with tracing:
- Ensure your HoneyHive API key is correct
- Verify that all required packages are installed
- Check that your LiteLLM API keys are valid
- Review the HoneyHive documentation for additional troubleshooting steps
## Next Steps
- Experiment with different LLM providers through LiteLLM
- Add custom metrics to your traces
- Implement A/B testing of different models
- Explore HoneyHive’s evaluation capabilities for your LLM responses
By integrating HoneyHive with LiteLLM, you gain valuable insights into your LLM operations and can optimize for better performance, cost-efficiency, and response quality.