Google Gemini provides multimodal AI models for chat, vision, and function calling. HoneyHive integrates with Gemini via the OpenInference instrumentor, automatically capturing all API calls, function calls, and token usage.

Quick Start

Add HoneyHive tracing with just four lines of code. Drop this into your existing Gemini app and all generate calls, function calls, and chat sessions are traced automatically.
To see where to initialize the tracer for your environment, including AWS Lambda and long-running servers, see Tracer Initialization.
pip install "honeyhive[openinference-google-ai]>=1.0.0rc0"

# Or install separately
pip install "honeyhive>=1.0.0rc0" openinference-instrumentation-google-genai google-genai
import os
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.google_genai import GoogleGenAIInstrumentor

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
GoogleGenAIInstrumentor().instrument(tracer_provider=tracer.provider)

# Your existing Gemini code works unchanged

What Gets Traced

The instrumentor automatically captures:
  • Content generation - client.models.generate_content() with inputs, outputs, and token usage
  • Function calls - Each function call with arguments and results
  • Chat sessions - chat.send_message() with conversation history
  • Streaming responses - Streamed generations with aggregated tokens
No manual instrumentation required.
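For instance, function calls are captured without any extra setup when you pass tools to the model. A minimal sketch (get_weather is a hypothetical tool, not part of any SDK), relying on the google-genai SDK's automatic function calling:

```python
import os

# Hypothetical tool -- the instrumentor records the model's function-call
# arguments and the function's return value on the same trace.
def get_weather(city: str) -> dict:
    """Toy lookup; swap in a real weather API."""
    return {"city": city, "forecast": "sunny", "temp_c": 21}

if os.getenv("GOOGLE_API_KEY"):
    from google import genai
    from google.genai import types

    client = genai.Client(api_key=os.getenv("GOOGLE_API_KEY"))

    # With automatic function calling, the SDK invokes get_weather itself;
    # both the tool call and the final answer appear in the trace.
    response = client.models.generate_content(
        model="gemini-2.0-flash",
        contents="What's the weather in Paris?",
        config=types.GenerateContentConfig(tools=[get_weather]),
    )
    print(response.text)
```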

Example: Content Generation

import os
from google import genai
from google.genai import types
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.google_genai import GoogleGenAIInstrumentor

tracer = HoneyHiveTracer.init(project="your-project")
GoogleGenAIInstrumentor().instrument(tracer_provider=tracer.provider)

client = genai.Client(api_key=os.getenv("GOOGLE_API_KEY"))

# Simple content generation
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="What is the capital of France?",
    config=types.GenerateContentConfig(max_output_tokens=100),
)
print(response.text)

# Chat session - also traced
chat = client.chats.create(model="gemini-2.0-flash")
chat_response = chat.send_message("Tell me a fun fact about Paris.")
print(chat_response.text)
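Streaming works the same way. A short sketch using the SDK's generate_content_stream (join_chunks is a hypothetical helper added here for illustration); the instrumentor aggregates the streamed chunks into a single span with total token usage:

```python
import os

def join_chunks(texts) -> str:
    """Concatenate streamed text chunks into the full response."""
    return "".join(t for t in texts if t)

if os.getenv("GOOGLE_API_KEY"):
    from google import genai

    client = genai.Client(api_key=os.getenv("GOOGLE_API_KEY"))

    # Chunks are yielded as they are generated; tracing needs no changes.
    stream = client.models.generate_content_stream(
        model="gemini-2.0-flash",
        contents="Write a haiku about Paris.",
    )
    print(join_chunks(chunk.text for chunk in stream))
```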

Environment Configuration

# HoneyHive configuration
export HH_API_KEY="your-honeyhive-api-key"
export HH_PROJECT="your-project"

# Google AI configuration
export GOOGLE_API_KEY="your-google-ai-api-key"
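A small fail-fast check at startup makes missing-variable problems obvious before the first API call. This is a sketch; require_env is a hypothetical helper, not part of the HoneyHive SDK:

```python
import os

def require_env(*names: str) -> dict:
    """Return the requested environment variables, or raise if any are unset."""
    missing = [n for n in names if not os.getenv(n)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {n: os.environ[n] for n in names}

# Fail fast before initializing the tracer or the Gemini client, e.g.:
# config = require_env("HH_API_KEY", "HH_PROJECT", "GOOGLE_API_KEY")
```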

Troubleshooting

Traces not appearing

  1. Check environment variables - Ensure HH_API_KEY and HH_PROJECT are set
  2. Pass the tracer provider - The instrumentor must receive tracer_provider=tracer.provider:
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.google_genai import GoogleGenAIInstrumentor

tracer = HoneyHiveTracer.init(project="your-project")

# ✅ Correct - pass tracer_provider
GoogleGenAIInstrumentor().instrument(tracer_provider=tracer.provider)

# ❌ Wrong - missing tracer_provider
GoogleGenAIInstrumentor().instrument()
  3. Initialize before making calls - Call instrument() before creating the genai.Client

SDK version

The instrumentor works with the newer google-genai SDK. If you’re using the older google-generativeai package, migrate to google-genai:
# Newer SDK (recommended)
pip install google-genai openinference-instrumentation-google-genai

# Import pattern
from google import genai
from openinference.instrumentation.google_genai import GoogleGenAIInstrumentor

Resources

  • Google ADK - Google Agent Development Kit integration
  • Enrich Your Traces - Add user IDs and custom metadata to traces
  • Distributed Tracing - Trace calls across service boundaries