Add HoneyHive observability to your Google Gemini applications
Google Gemini provides multimodal AI models for chat, vision, and function calling. HoneyHive integrates with Gemini via the OpenInference instrumentor, automatically capturing all API calls, function calls, and token usage.
HoneyHive tracing takes just four lines of code. Add them to your existing Gemini app and all generate calls, function calls, and chat sessions are traced automatically.
To see where to initialize the tracer for your environment, including AWS Lambda and long-running servers, see Tracer Initialization.
```shell
pip install "honeyhive[openinference-google-ai]>=1.0.0rc0"

# Or install separately
pip install "honeyhive>=1.0.0rc0" openinference-instrumentation-google-genai google-genai
```
```python
import os

from honeyhive import HoneyHiveTracer
from openinference.instrumentation.google_genai import GoogleGenAIInstrumentor

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
GoogleGenAIInstrumentor().instrument(tracer_provider=tracer.provider)

# Your existing Gemini code works unchanged
```
```python
import os

from google import genai
from google.genai import types
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.google_genai import GoogleGenAIInstrumentor

tracer = HoneyHiveTracer.init(project="your-project")
GoogleGenAIInstrumentor().instrument(tracer_provider=tracer.provider)

client = genai.Client(api_key=os.getenv("GOOGLE_API_KEY"))

# Simple content generation
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="What is the capital of France?",
    config=types.GenerateContentConfig(max_output_tokens=100),
)
print(response.text)

# Chat session - also traced
chat = client.chats.create(model="gemini-2.0-flash")
chat_response = chat.send_message("Tell me a fun fact about Paris.")
print(chat_response.text)
```
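Function calls are traced the same way. As a minimal sketch: `get_weather` below is a hypothetical stub (not part of HoneyHive or the Gemini SDK) passed as a tool, relying on the google-genai SDK's automatic function calling, which builds the declaration from the function's signature and docstring and executes it locally when the model requests it.

```python
def get_weather(city: str) -> dict:
    """Return a weather report for a city (canned stub for illustration)."""
    return {"city": city, "forecast": "sunny", "temp_c": 21}


if __name__ == "__main__":
    import os

    from google import genai
    from google.genai import types
    from honeyhive import HoneyHiveTracer
    from openinference.instrumentation.google_genai import GoogleGenAIInstrumentor

    tracer = HoneyHiveTracer.init(project=os.getenv("HH_PROJECT"))
    GoogleGenAIInstrumentor().instrument(tracer_provider=tracer.provider)

    client = genai.Client(api_key=os.getenv("GOOGLE_API_KEY"))
    # Passing a Python function in `tools` enables automatic function calling:
    # the model's tool call runs locally and the result is fed back to it.
    response = client.models.generate_content(
        model="gemini-2.0-flash",
        contents="What's the weather in Paris?",
        config=types.GenerateContentConfig(tools=[get_weather]),
    )
    print(response.text)
```

Both the model's function-call request and the final response appear as spans in the HoneyHive trace.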
```shell
# HoneyHive configuration
export HH_API_KEY="your-honeyhive-api-key"
export HH_PROJECT="your-project"

# Google AI configuration
export GOOGLE_API_KEY="your-google-ai-api-key"
```