DSPy is a framework for programming — not prompting — language models. It provides composable modules like ChainOfThought, ReAct, and custom Module classes that can be optimized and compiled. HoneyHive integrates with DSPy via the OpenInference DSPy instrumentor, which captures module calls, LM interactions, and tool executions as OpenTelemetry spans.

Quick Start

Add HoneyHive tracing in a few lines of code. Initialize the tracer, create the instrumentors, and call .instrument() — all DSPy module calls, LLM requests, and tool executions are then traced automatically.
pip install "honeyhive>=1.0.0rc0" dspy openinference-instrumentation-dspy openinference-instrumentation-openai
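The examples below read the API key and project name from the HH_API_KEY and HH_PROJECT environment variables. Set them before running (placeholder values shown; substitute your actual HoneyHive credentials):

```shell
export HH_API_KEY="your-api-key"
export HH_PROJECT="your-project"
```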
import os
import dspy
from openinference.instrumentation.dspy import DSPyInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor
from honeyhive import HoneyHiveTracer

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)

DSPyInstrumentor().instrument(tracer_provider=tracer.provider)
OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)

# Your existing DSPy code works unchanged
lm = dspy.LM(model="openai/gpt-4o-mini")
dspy.configure(lm=lm)

What Gets Traced

The instrumentation automatically captures:
  • Module calls — Every forward() invocation with inputs and outputs (ReAct, ChainOfThought, Predict, custom Modules)
  • LLM calls — Model requests, responses, and token usage via the OpenAI instrumentor
  • Tool executions — Each tool call with arguments and results (in ReAct agents)
  • Pipeline composition — Parent-child nesting when Modules call sub-Modules
No manual instrumentation required.

Example: ReAct Agent with Tools

import os
import dspy
from openinference.instrumentation.dspy import DSPyInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor
from honeyhive import HoneyHiveTracer

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
DSPyInstrumentor().instrument(tracer_provider=tracer.provider)
OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)

lm = dspy.LM(model="openai/gpt-4o-mini")
dspy.configure(lm=lm)


def lookup_order_status(order_id: str) -> str:
    """Look up the current status of a customer order."""
    statuses = {
        "ORD-1001": "shipped, ETA 2 days",
        "ORD-1002": "processing, ETA 5 days",
    }
    return statuses.get(order_id.upper(), "not found")


agent = dspy.ReAct(
    "question -> answer",
    tools=[lookup_order_status],
    max_iters=5,
)

result = agent(question="What is the status of order ORD-1002?")
print(result.answer)
In HoneyHive, you’ll see the full trace: ReAct orchestration -> tool calls -> LLM reasoning steps.

Example: Multi-Module Pipeline

DSPy’s Module class lets you compose sub-modules into pipelines. Each module call is traced as a separate span with correct parent-child nesting:
import os
import dspy
from openinference.instrumentation.dspy import DSPyInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor
from honeyhive import HoneyHiveTracer

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
DSPyInstrumentor().instrument(tracer_provider=tracer.provider)
OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)

lm = dspy.LM(model="openai/gpt-4o-mini")
dspy.configure(lm=lm)


class AnalyzeSignature(dspy.Signature):
    """Analyze a customer issue and summarize the situation."""
    issue: str = dspy.InputField()
    analysis: str = dspy.OutputField()


class ResolveSignature(dspy.Signature):
    """Draft a resolution for a customer support case."""
    analysis: str = dspy.InputField()
    resolution: str = dspy.OutputField()


class SupportPipeline(dspy.Module):
    def __init__(self):
        super().__init__()
        self.analyzer = dspy.ChainOfThought(AnalyzeSignature)
        self.resolver = dspy.ChainOfThought(ResolveSignature)

    def forward(self, issue: str) -> dspy.Prediction:
        analysis = self.analyzer(issue=issue)
        return self.resolver(analysis=analysis.analysis)


pipeline = SupportPipeline()
result = pipeline(issue="Order delayed over a week, customer wants a refund.")
print(result.resolution)

Example: Custom Business Logic with @trace

Use the @trace decorator to wrap business logic that orchestrates multiple DSPy module calls. This creates a parent span encompassing the entire workflow, with DSPy module calls as child spans:
import os
import dspy
from openinference.instrumentation.dspy import DSPyInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor
from honeyhive import HoneyHiveTracer, enrich_span, trace

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
DSPyInstrumentor().instrument(tracer_provider=tracer.provider)
OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)

lm = dspy.LM(model="openai/gpt-4o-mini")
dspy.configure(lm=lm)


class TriageSignature(dspy.Signature):
    """Classify a support request by priority and category."""
    request: str = dspy.InputField()
    priority: str = dspy.OutputField(desc="high, medium, or low")
    category: str = dspy.OutputField(desc="billing, shipping, product, or general")


@trace(event_type="chain")
def handle_support_ticket(order_id: str, customer_message: str) -> dict:
    """End-to-end support ticket handler combining triage and resolution.

    The @trace decorator creates a parent span that wraps the entire
    business workflow. DSPy module calls inside are captured as child
    spans, giving a complete view of the processing pipeline.
    """
    # Step 1: Triage the request
    triage = dspy.ChainOfThought(TriageSignature)
    triage_result = triage(request=customer_message)

    enrich_span(
        metadata={
            "order_id": order_id,
            "priority": triage_result.priority,
            "category": triage_result.category,
        },
    )

    # Step 2: Resolve based on triage
    # SupportPipeline is defined in the Multi-Module Pipeline example above
    pipeline = SupportPipeline()
    resolution = pipeline(issue=customer_message)

    enrich_span(
        metrics={"steps_completed": 2},
    )

    return {
        "priority": triage_result.priority,
        "category": triage_result.category,
        "resolution": resolution.resolution,
    }


ticket = handle_support_ticket(
    order_id="ORD-1003",
    customer_message="My order has been delayed for over a week. I need a refund.",
)
print(f"Priority: {ticket['priority']}, Category: {ticket['category']}")
print(f"Resolution: {ticket['resolution']}")
The @trace decorator and enrich_span() give you:
  • A parent span for the full business workflow
  • Custom metadata (order ID, priority, category) attached to the span
  • Custom metrics (steps completed) for monitoring
  • DSPy module calls automatically nested as child spans

Troubleshooting

Traces not appearing

  1. Call .instrument() before running any DSPy code — Instrumentation must be active before module execution:
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.dspy import DSPyInstrumentor

tracer = HoneyHiveTracer.init(project="your-project")

# Instrument before running DSPy modules
DSPyInstrumentor().instrument(tracer_provider=tracer.provider)

# Now run your DSPy code
result = agent(question="Hello")
  2. Check environment variables — Ensure HH_API_KEY and HH_PROJECT are set
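A quick standard-library check (no HoneyHive APIs involved) can confirm both variables are set before the tracer initializes — a minimal sketch, assuming the variable names used in the snippets above:

```python
import os

# Warn early if the HoneyHive credentials are not configured
missing = [name for name in ("HH_API_KEY", "HH_PROJECT") if not os.getenv(name)]
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
```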
  3. Add the OpenAI instrumentor — DSPy uses LiteLLM/OpenAI under the hood. Adding the OpenAI instrumentor captures detailed LLM-level spans:
from openinference.instrumentation.openai import OpenAIInstrumentor
OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)
  4. Clean up instrumentors on exit — Call .uninstrument() to avoid duplicate spans in long-running processes:
dspy_instrumentor = DSPyInstrumentor()
dspy_instrumentor.instrument(tracer_provider=tracer.provider)

try:
    # Your DSPy code
    pass
finally:
    tracer.force_flush()
    dspy_instrumentor.uninstrument()

Enrich Your Traces

Add user IDs and custom metadata to DSPy traces

Custom Spans

Create spans for business logic around module calls

Distributed Tracing

Trace pipelines across service boundaries

Query Trace Data

Export traces programmatically

Resources