Add HoneyHive observability to your DSPy applications
DSPy is a framework for programming — not prompting — language models. It provides composable modules like ChainOfThought, ReAct, and custom Module classes that can be optimized and compiled.

HoneyHive integrates with DSPy via the OpenInference DSPy instrumentor, which captures module calls, LM interactions, and tool executions as OpenTelemetry spans.
Add HoneyHive tracing in a few lines of code: initialize the tracer, create the instrumentors, and call .instrument(). All DSPy module calls, LLM requests, and tool executions are then traced automatically.
```python
import os

import dspy
from openinference.instrumentation.dspy import DSPyInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor

from honeyhive import HoneyHiveTracer

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)

# Instrument before running any DSPy code
DSPyInstrumentor().instrument(tracer_provider=tracer.provider)
OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)

lm = dspy.LM(model="openai/gpt-4o-mini")
dspy.configure(lm=lm)


def lookup_order_status(order_id: str) -> str:
    """Look up the current status of a customer order."""
    statuses = {
        "ORD-1001": "shipped, ETA 2 days",
        "ORD-1002": "processing, ETA 5 days",
    }
    return statuses.get(order_id.upper(), "not found")


agent = dspy.ReAct(
    "question -> answer",
    tools=[lookup_order_status],
    max_iters=5,
)

result = agent(question="What is the status of order ORD-1002?")
print(result.answer)
```
In HoneyHive, you’ll see the full trace: ReAct orchestration -> tool calls -> LLM reasoning steps.
Use the @trace decorator to wrap business logic that orchestrates multiple DSPy module calls. This creates a parent span encompassing the entire workflow, with DSPy module calls as child spans:
```python
import os

import dspy
from openinference.instrumentation.dspy import DSPyInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor

from honeyhive import HoneyHiveTracer, enrich_span, trace

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)

DSPyInstrumentor().instrument(tracer_provider=tracer.provider)
OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)

lm = dspy.LM(model="openai/gpt-4o-mini")
dspy.configure(lm=lm)


class TriageSignature(dspy.Signature):
    """Classify a support request by priority and category."""

    request: str = dspy.InputField()
    priority: str = dspy.OutputField(desc="high, medium, or low")
    category: str = dspy.OutputField(desc="billing, shipping, product, or general")


@trace(event_type="chain")
def handle_support_ticket(order_id: str, customer_message: str) -> dict:
    """End-to-end support ticket handler combining triage and resolution.

    The @trace decorator creates a parent span that wraps the entire
    business workflow. DSPy module calls inside are captured as child
    spans, giving a complete view of the processing pipeline.
    """
    # Step 1: Triage the request
    triage = dspy.ChainOfThought(TriageSignature)
    triage_result = triage(request=customer_message)
    enrich_span(
        metadata={
            "order_id": order_id,
            "priority": triage_result.priority,
            "category": triage_result.category,
        },
    )

    # Step 2: Resolve based on triage
    # SupportPipeline is defined in the Multi-Module Pipeline example above
    pipeline = SupportPipeline()
    resolution = pipeline(issue=customer_message)
    enrich_span(
        metrics={"steps_completed": 2},
    )

    return {
        "priority": triage_result.priority,
        "category": triage_result.category,
        "resolution": resolution.resolution,
    }


ticket = handle_support_ticket(
    order_id="ORD-1003",
    customer_message="My order has been delayed for over a week. I need a refund.",
)
print(f"Priority: {ticket['priority']}, Category: {ticket['category']}")
print(f"Resolution: {ticket['resolution']}")
```
The @trace decorator and enrich_span() give you:
- A parent span for the full business workflow
- Custom metadata (order ID, priority, category) attached to the span
- Custom metrics (steps completed) for monitoring
- DSPy module calls automatically nested as child spans
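The parent/child nesting above relies on context propagation: the decorator opens a span, and anything instrumented inside it attaches as a child. The following is a conceptual, stdlib-only sketch of that mechanism — it is not the HoneyHive SDK, and the `Span` and `start_span` names are hypothetical illustrations:

```python
import contextvars
from contextlib import contextmanager

# Conceptual sketch (not the HoneyHive SDK): how a decorator-created parent
# span ends up with automatically-instrumented child spans via a context
# variable that always points at the currently active span.
_current_span = contextvars.ContextVar("current_span", default=None)


class Span:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        self.metadata = {}
        if parent is not None:
            parent.children.append(self)


@contextmanager
def start_span(name):
    # New spans adopt whatever span is currently active as their parent
    parent = _current_span.get()
    span = Span(name, parent)
    token = _current_span.set(span)
    try:
        yield span
    finally:
        _current_span.reset(token)


# A "@trace"-style parent span wrapping two "module call" child spans
with start_span("handle_support_ticket") as root:
    root.metadata["order_id"] = "ORD-1003"
    with start_span("ChainOfThought.triage"):
        pass
    with start_span("SupportPipeline.resolve"):
        pass

print([c.name for c in root.children])
# → ['ChainOfThought.triage', 'SupportPipeline.resolve']
```

In the real integration, OpenTelemetry's context machinery plays the role of `_current_span`, which is why DSPy module calls made inside a `@trace`-decorated function appear under its span without any extra wiring.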
Call .instrument() before running any DSPy code — instrumentation must be active before any module executes:
```python
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.dspy import DSPyInstrumentor

tracer = HoneyHiveTracer.init(project="your-project")

# Instrument before running DSPy modules
DSPyInstrumentor().instrument(tracer_provider=tracer.provider)

# Now run your DSPy code
result = agent(question="Hello")
```
Check environment variables — ensure HH_API_KEY and HH_PROJECT are set.
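A quick way to catch missing credentials early is to check for them at startup. This is an optional sketch, not part of the HoneyHive SDK:

```python
import os

# Credentials the HoneyHive tracer reads in the examples above
REQUIRED_VARS = ("HH_API_KEY", "HH_PROJECT")

# Collect anything unset or empty so the problem surfaces before tracing
# silently fails to initialize
missing = [v for v in REQUIRED_VARS if not os.getenv(v)]
if missing:
    print(f"Warning: missing environment variables: {', '.join(missing)}")
```

Running this before `HoneyHiveTracer.init()` makes a misconfigured environment obvious instead of producing empty traces.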
Add the OpenAI instrumentor — DSPy uses LiteLLM/OpenAI under the hood. Adding the OpenAI instrumentor captures detailed LLM-level spans:
```python
from openinference.instrumentation.openai import OpenAIInstrumentor

OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)
```
Clean up instrumentors on exit — Call .uninstrument() to avoid duplicate spans in long-running processes:
```python
dspy_instrumentor = DSPyInstrumentor()
dspy_instrumentor.instrument(tracer_provider=tracer.provider)

try:
    # Your DSPy code
    pass
finally:
    # Flush pending spans, then detach the instrumentor
    tracer.force_flush()
    dspy_instrumentor.uninstrument()
```