BYOI Advantage: HoneyHive has zero dependencies on the Google AI SDKs. Use google-genai or google-generativeai at any version, with no conflicts.
Compatibility
Python Version Support
| Support Level | Python Versions |
|---|---|
| Fully Supported | 3.11, 3.12, 3.13 |
| Not Supported | 3.10 and below |
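Since versions below 3.11 are unsupported, a startup guard can surface interpreter problems early. This helper is a suggestion, not part of the HoneyHive SDK:

```python
import sys

def check_python_version(min_version=(3, 11)) -> bool:
    """Return True if the running interpreter meets min_version."""
    return sys.version_info[:2] >= min_version

# Fail fast at startup if the interpreter is too old:
if not check_python_version():
    print("Warning: HoneyHive requires Python 3.11 or newer.")
```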
Google AI SDK Requirements
For google-genai (newer SDK):
- Minimum: google-genai >= 1.0.0
- Recommended: google-genai >= 1.13.0
For google-generativeai (legacy SDK):
- Minimum: google-generativeai >= 0.3.0
- Recommended: google-generativeai >= 0.4.0
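To verify the installed SDK meets these minimums at runtime, you can compare versions with the standard library. A small sketch (the helper names are illustrative, not a HoneyHive API); pre-release suffixes like `rc0` are simply dropped during parsing:

```python
from importlib import metadata

def parse_version(version: str) -> tuple:
    """Parse 'X.Y.Z' into a comparable tuple of ints (pre-release tags dropped)."""
    parts = []
    for piece in version.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def meets_minimum(package: str, minimum: str) -> bool:
    """Return True if the installed package satisfies the minimum version."""
    try:
        installed = metadata.version(package)
    except metadata.PackageNotFoundError:
        return False
    return parse_version(installed) >= parse_version(minimum)

# Example: meets_minimum("google-generativeai", "0.3.0")
```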
Known Limitations
- Streaming: Supported with manual span management
- Multimodal Input: Vision features traced but media content not captured
- Function Calling: Supported in Gemini Pro models
- Safety Settings: Not captured in traces by default
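Because streaming requires manual span management, the streamed chunks must be assembled yourself so the final span records the full response. This pure-Python sketch shows only the accumulation step you would run inside a `@trace`-decorated function; the `genai` streaming call itself (e.g. `model.generate_content(prompt, stream=True)`) is omitted:

```python
def collect_stream_text(chunks) -> str:
    """Concatenate the .text of each streamed chunk into the full response.

    `chunks` is any iterable of objects with a `.text` attribute, such as
    the iterator returned by a streaming generate_content call.
    """
    pieces = []
    for chunk in chunks:
        # Some chunks (e.g. safety blocks) may carry no text.
        text = getattr(chunk, "text", None)
        if text:
            pieces.append(text)
    return "".join(pieces)
```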
Choose Your Instrumentor
| Instrumentor | Status | Best For | Install |
|---|---|---|---|
| OpenInference | Fully Supported | Getting started, open-source | pip install "honeyhive[openinference-google-ai]>=1.0.0rc0" |
| Traceloop | Experimental | Production with cost tracking | pip install "honeyhive[traceloop-google-ai]>=1.0.0rc0" |
Quick Start with OpenInference
Installation
# Recommended: Install with Google AI integration
pip install "honeyhive[openinference-google-ai]>=1.0.0rc0"
# Alternative: Manual installation (quote specifiers so the shell doesn't treat >= as redirection)
pip install "honeyhive>=1.0.0rc0" openinference-instrumentation-google-generativeai "google-generativeai>=0.3.0"
Basic Setup (google-generativeai SDK)
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.google_generativeai import GoogleGenerativeAIInstrumentor
import google.generativeai as genai
import os
# Step 1: Initialize HoneyHive tracer first
tracer = HoneyHiveTracer.init(
    project="your-project"  # Or set HH_PROJECT environment variable
)  # Uses HH_API_KEY from environment
# Step 2: Initialize instrumentor with tracer_provider
instrumentor = GoogleGenerativeAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)
# Configure Google AI
genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))
# Now all calls are automatically traced!
model = genai.GenerativeModel('gemini-pro')
response = model.generate_content("Hello!")
print(response.text)
# ✨ Automatically traced!
Order matters! The tracer must be initialized BEFORE calling instrumentor.instrument().
Using the Newer google-genai SDK
For the newer google-genai SDK, use the @trace decorator pattern:
from google import genai
from honeyhive import HoneyHiveTracer, trace
import os
# Initialize tracer
HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project="my-project"
)
@trace
def generate_response(query: str) -> str:
    """Generate response using Gemini."""
    client = genai.Client(api_key=os.getenv("GEMINI_API_KEY"))
    response = client.models.generate_content(
        model="gemini-2.0-flash",
        contents=query
    )
    return response.text
# All calls within @trace are captured
result = generate_response("The opposite of hot is")
print(result)
Quick Start with Traceloop (Experimental)
Installation
# Recommended: Install with Traceloop Google AI integration
pip install "honeyhive[traceloop-google-ai]>=1.0.0rc0"
# Alternative: Manual installation (quote specifiers so the shell doesn't treat >= as redirection)
pip install "honeyhive>=1.0.0rc0" opentelemetry-instrumentation-google-generativeai "google-generativeai>=0.3.0"
Basic Setup
from honeyhive import HoneyHiveTracer
from opentelemetry.instrumentation.google_generativeai import GoogleGenerativeAIInstrumentor
import google.generativeai as genai
import os
# Step 1: Initialize HoneyHive tracer first
tracer = HoneyHiveTracer.init(project="your-project")
# Step 2: Initialize Traceloop instrumentor with tracer_provider
instrumentor = GoogleGenerativeAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)
# Configure Google AI
genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))
# All calls traced with enhanced metrics!
model = genai.GenerativeModel('gemini-pro')
response = model.generate_content("Hello!")
Traceloop support for Google AI is experimental. Some Gemini-specific features may be in development.
Instrumentor Comparison
| Feature | OpenInference | Traceloop |
|---|---|---|
| Status | Fully Supported | Experimental |
| Token Tracking | Basic span attributes | Detailed metrics |
| Multimodal | Vision traced | Vision traced |
| Performance | Lightweight | Smart batching |
| Best For | Dev, getting started | Production |
Advanced Usage with @trace Decorator
Combine instrumentors with explicit tracing for full control:
from honeyhive import HoneyHiveTracer, trace, enrich_span
from openinference.instrumentation.google_generativeai import GoogleGenerativeAIInstrumentor
import google.generativeai as genai
import os
# Initialize tracer and instrumentor
tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project="your-project",
    source="production"
)
instrumentor = GoogleGenerativeAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)
genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))
@trace
def analyze_with_gemini(content: str) -> dict:
    """Multi-step analysis with Gemini."""
    # Add business context
    enrich_span({
        "use_case": "content_analysis",
        "content_length": len(content)
    })
    # Quick analysis with Gemini Pro
    pro_model = genai.GenerativeModel('gemini-pro')
    summary = pro_model.generate_content(
        f"Summarize this in one sentence: {content}"
    )
    # Detailed analysis
    detailed = pro_model.generate_content(
        f"Provide detailed analysis: {content}"
    )
    enrich_span({
        "models_used": ["gemini-pro"],
        "status": "success"
    })
    return {
        "summary": summary.text,
        "analysis": detailed.text
    }
Example: Multimodal (Vision)
from honeyhive import HoneyHiveTracer, trace, enrich_span
from openinference.instrumentation.google_generativeai import GoogleGenerativeAIInstrumentor
import google.generativeai as genai
import PIL.Image
import os
tracer = HoneyHiveTracer.init(project="vision-demo")
instrumentor = GoogleGenerativeAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)
genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))
@trace
def analyze_image(image_path: str, question: str) -> str:
    """Analyze an image using Gemini Pro Vision."""
    enrich_span({
        "task": "image_analysis",
        "image_path": image_path
    })
    # Load image
    img = PIL.Image.open(image_path)
    # Use vision model
    model = genai.GenerativeModel('gemini-pro-vision')
    response = model.generate_content([question, img])
    enrich_span({
        "model": "gemini-pro-vision",
        "has_response": bool(response.text)
    })
    return response.text
# Example usage
result = analyze_image("photo.jpg", "What's in this image?")
Media Content: The actual image/video bytes are not captured in traces to keep trace sizes manageable. Only metadata is logged.
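Since only metadata reaches the trace, you can attach useful image details yourself via enrich_span without including raw bytes. A minimal standard-library sketch; the field names are illustrative, not a HoneyHive convention:

```python
import os

def image_metadata(image_path: str) -> dict:
    """Summarize an image file without putting its bytes in the trace."""
    return {
        "image_path": image_path,
        "size_bytes": os.path.getsize(image_path),
        "format": os.path.splitext(image_path)[1].lstrip(".").lower(),
    }

# Inside a @trace function: enrich_span(image_metadata("photo.jpg"))
```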
Example: Function Calling
from honeyhive import HoneyHiveTracer, trace
from openinference.instrumentation.google_generativeai import GoogleGenerativeAIInstrumentor
import google.generativeai as genai
import os
tracer = HoneyHiveTracer.init(project="function-calling-demo")
instrumentor = GoogleGenerativeAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)
genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))
# Define tools
def get_weather(location: str) -> dict:
    """Get weather for a location."""
    # Simulated weather data
    return {"temp": "72°F", "conditions": "Sunny"}

weather_tool = genai.protos.Tool(
    function_declarations=[
        genai.protos.FunctionDeclaration(
            name="get_weather",
            description="Get weather for a location",
            parameters=genai.protos.Schema(
                type=genai.protos.Type.OBJECT,
                properties={
                    "location": genai.protos.Schema(
                        type=genai.protos.Type.STRING
                    )
                },
                required=["location"]
            )
        )
    ]
)

@trace
def weather_assistant(query: str) -> str:
    """Assistant with weather tool."""
    model = genai.GenerativeModel(
        'gemini-pro',
        tools=[weather_tool]
    )
    chat = model.start_chat()
    response = chat.send_message(query)
    # Handle function calls
    if response.candidates[0].content.parts[0].function_call:
        fc = response.candidates[0].content.parts[0].function_call
        if fc.name == "get_weather":
            weather = get_weather(fc.args["location"])
            response = chat.send_message(
                genai.protos.Content(
                    parts=[genai.protos.Part(
                        function_response=genai.protos.FunctionResponse(
                            name="get_weather",
                            response=weather
                        )
                    )]
                )
            )
    return response.text
result = weather_assistant("What's the weather in Paris?")
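The if/elif dispatch above works for a single tool; with several declared tools, a registry keyed by function name scales better. A minimal sketch of that pattern (the registry is a suggestion, not a Gemini API feature):

```python
def get_weather(location: str) -> dict:
    """Simulated weather lookup (same stub as above)."""
    return {"temp": "72°F", "conditions": "Sunny"}

# Map declared tool names to local handlers.
TOOL_REGISTRY = {
    "get_weather": get_weather,
}

def dispatch_tool_call(name: str, args: dict) -> dict:
    """Route a model-issued function call to the matching local handler."""
    handler = TOOL_REGISTRY.get(name)
    if handler is None:
        raise KeyError(f"Model requested unknown tool: {name}")
    return handler(**args)
```

Inside `weather_assistant`, the inner `if fc.name == ...` branch would become a single `dispatch_tool_call(fc.name, dict(fc.args))` call.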
Environment Configuration
# HoneyHive configuration
export HH_API_KEY="your-honeyhive-api-key"
export HH_PROJECT="your-project"
export HH_SOURCE="production"
# Google AI configuration
export GOOGLE_API_KEY="your-google-ai-api-key"
# Or for newer SDK
export GEMINI_API_KEY="your-gemini-api-key"
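Before initializing the tracer, it can help to fail fast when a required variable is unset rather than discover it through missing traces. A small sketch, not part of the HoneyHive SDK:

```python
import os

def require_env(*names: str) -> dict:
    """Return the named environment variables, raising if any are missing."""
    missing = [name for name in names if not os.getenv(name)]
    if missing:
        raise EnvironmentError(
            f"Missing environment variables: {', '.join(missing)}"
        )
    return {name: os.environ[name] for name in names}

# Example: require_env("HH_API_KEY", "GOOGLE_API_KEY")
```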
What Gets Traced
With instrumentors initialized, these Google AI calls are automatically traced:
- model.generate_content() - Content generation
- chat.send_message() - Chat completions
- Streaming responses
- Function calling / tool use
- Vision API calls
Captured data includes:
- Model name and parameters
- Input content
- Output responses
- Token usage
- Latency metrics
- Errors and exceptions
Troubleshooting
Missing Traces
Ensure correct initialization order:
# ✅ Correct
tracer = HoneyHiveTracer.init(project="my-project")
instrumentor = GoogleGenerativeAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)
# ❌ Wrong - instrumentor before tracer
instrumentor = GoogleGenerativeAIInstrumentor()
instrumentor.instrument() # No tracer_provider!
tracer = HoneyHiveTracer.init(project="my-project")
SDK Version Mismatch
# For instrumentors, use google-generativeai (quote so the shell doesn't treat >= as redirection)
pip install "google-generativeai>=0.3.0"
# For manual tracing with @trace, you can use google-genai
pip install "google-genai>=1.13.0"
Import Errors
# For OpenInference
pip install "honeyhive[openinference-google-ai]>=1.0.0rc0"
# For Traceloop
pip install "honeyhive[traceloop-google-ai]>=1.0.0rc0"
Migration Between Instrumentors
From OpenInference to Traceloop:
# Before (OpenInference)
from openinference.instrumentation.google_generativeai import GoogleGenerativeAIInstrumentor
# After (Traceloop) - just change the import
from opentelemetry.instrumentation.google_generativeai import GoogleGenerativeAIInstrumentor
# Rest of the code stays the same!