Add HoneyHive tracing to your existing LLM application with just 5 lines of code
You have working LLM code and want to add observability without rewriting anything. This guide shows you how to add HoneyHive tracing with a few lines in your app's entry point or runtime setup layer, with no changes to your existing logic.

What you need:
HoneyHive API key - get it at app.honeyhive.ai (click your org name → Copy API Key)
Your LLM provider’s API key (OpenAI, Anthropic, etc.)
```python
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer = HoneyHiveTracer.init(api_key="your-key", project="your-project")
instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

# Your existing code below stays exactly the same
```
Keep the order the same in every runtime: initialize HoneyHiveTracer first, then initialize instrumentors with tracer.provider.
Using OpenAI and Anthropic in the same app? Initialize both instrumentors with the same tracer:
```python
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor
from openinference.instrumentation.anthropic import AnthropicInstrumentor

tracer = HoneyHiveTracer.init(api_key="your-key", project="my-app")
OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)
AnthropicInstrumentor().instrument(tracer_provider=tracer.provider)

# Both providers now traced
```
My app has multiple files - where do I add the tracing setup?
Add it in the entry point or runtime setup layer where your app boots. For scripts, that is often main.py or app.py. For Lambda, use cached setup outside the handler. For FastAPI, Flask, or Django, initialize once at app startup and create a session per request. See Tracer Initialization for the runtime-specific patterns.
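The FastAPI pattern described above can be sketched as follows. This is a minimal sketch, not a definitive implementation: it assumes the `honeyhive` and `openinference-instrumentation-openai` packages are installed, that your HoneyHive key is in an `HH_API_KEY` environment variable, and that `my-app` and the `/answer` route stand in for your own project name and handlers.

```python
# Initialize once at server startup, keep request handlers unchanged.
# HH_API_KEY and the project name "my-app" are placeholder assumptions.
import os

from fastapi import FastAPI
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor

app = FastAPI()


@app.on_event("startup")
def setup_tracing() -> None:
    # Runs once when the app boots -- same order as above:
    # HoneyHiveTracer first, then instrumentors with tracer.provider.
    tracer = HoneyHiveTracer.init(
        api_key=os.environ["HH_API_KEY"],
        project="my-app",
    )
    OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)


@app.get("/answer")
def answer(q: str):
    # Your existing handler logic stays the same; any OpenAI calls
    # made here are traced automatically once startup has run.
    ...
```

The key design point is that instrumentation is process-wide state, so it belongs in a startup hook rather than in a request handler, where it would run on every call.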