OpenAI Agents SDK is a lightweight framework for building multi-agent workflows. It provides agents, handoffs, tools, guardrails, and session management out of the box. HoneyHive integrates with the Agents SDK via the OpenAIAgentsInstrumentor, automatically capturing agent runs, tool calls, handoffs, and LLM interactions.

Quick Start

Add HoneyHive tracing in just 3 lines of code. Initialize the tracer and instrumentor, and all agent runs, tool calls, handoffs, and model calls are automatically traced.
To see where to initialize the tracer for your environment, including AWS Lambda and long-running servers, see Tracer Initialization.
pip install "honeyhive>=1.0.0rc0" openai-agents openinference-instrumentation-openai-agents
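The snippets in this guide read credentials from environment variables. A minimal shell setup might look like this (the placeholder values are illustrative — substitute your own):

```shell
# Environment variables used throughout this guide (placeholder values).
export HH_API_KEY="your-honeyhive-api-key"      # HoneyHive API key
export HH_PROJECT="your-project-name"           # HoneyHive project name
export OPENAI_API_KEY="your-openai-api-key"     # read by the Agents SDK for model calls
```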
import os
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
OpenAIAgentsInstrumentor().instrument(tracer_provider=tracer.provider)

# Your existing Agents SDK code works unchanged
# Call tracer.force_flush() before your process exits
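For short-lived scripts, the final force_flush() can also be registered automatically with the standard library's atexit module. The sketch below uses a stand-in tracer class so it runs without credentials; in real code you would register the HoneyHiveTracer instance from the setup above instead.

```python
import atexit


class _DemoTracer:
    """Stand-in exposing the same force_flush() surface used in this guide."""

    def __init__(self):
        self.flushed = False

    def force_flush(self):
        self.flushed = True


tracer = _DemoTracer()

# Runs at interpreter shutdown, so pending spans are flushed even if the
# script exits without reaching an explicit force_flush() call. With the
# real tracer the line is identical: atexit.register(tracer.force_flush)
atexit.register(tracer.force_flush)
```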

What Gets Traced

This setup captures:
  • Agent runs - Every agent invocation with inputs and outputs
  • LLM calls - Model requests, responses, and token usage
  • Tool calls - Each tool execution with arguments and results
  • Handoffs - Agent-to-agent delegation with routing context
  • Agents-as-tools - Nested agent delegation via agent.as_tool()
No manual @trace decorators are required for the standard Agents SDK path.
The openai-agents pip package exposes agents as its top-level module, so imports use from agents import .... All examples below assume the Quick Start setup above; the setup block is repeated in each snippet so they are individually copy-pasteable.

Example: Single Agent with Tools

import asyncio
import os

from agents import Agent, Runner, function_tool
from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor

from honeyhive import HoneyHiveTracer

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
OpenAIAgentsInstrumentor().instrument(tracer_provider=tracer.provider)


@function_tool
def lookup_order_status(order_id: str) -> str:
    """Look up the current status and ETA for a customer order."""
    statuses = {
        "ORD-1001": {"state": "shipped", "eta_days": 2},
        "ORD-1002": {"state": "processing", "eta_days": 5},
        "ORD-1003": {"state": "delayed", "eta_days": 8},
    }
    status = statuses.get(order_id.upper())
    if status:
        return (
            f"Order {order_id.upper()}: {status['state']}, "
            f"estimated delivery in {status['eta_days']} days."
        )
    return f"Order {order_id.upper()}: not found in the system."


@function_tool
def lookup_policy(topic: str) -> str:
    """Look up support policy by topic: refund, cancellation, or shipping."""
    policies = {
        "refund": "Refunds are available within 30 days for undelivered or damaged items.",
        "cancellation": "Cancellation is allowed before shipment. Delayed orders can request assisted cancellation.",
        "shipping": "Delays beyond 7 days trigger proactive support outreach.",
    }
    return policies.get(topic.strip().lower(), "No policy found.")


agent = Agent(
    name="Support Generalist",
    instructions=(
        "You are a customer support generalist. Use tools for order status and "
        "policy questions, then reply with short, customer-friendly answers."
    ),
    tools=[lookup_order_status, lookup_policy],
)


async def main():
    await Runner.run(
        agent,
        "For delayed order ORD-1003, explain the cancellation policy and "
        "recommended next steps.",
    )
    tracer.force_flush()


asyncio.run(main())

Example: Multi-Agent Handoffs

The Agents SDK supports handoffs where a triage agent delegates to specialists:
import asyncio
import os

from agents import Agent, Runner, function_tool
from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor

from honeyhive import HoneyHiveTracer

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
OpenAIAgentsInstrumentor().instrument(tracer_provider=tracer.provider)


@function_tool
def lookup_order_status(order_id: str) -> str:
    """Look up order status."""
    return f"Order {order_id}: shipped, ETA 2 days"


@function_tool
def lookup_policy(topic: str) -> str:
    """Look up support policy."""
    return f"Policy ({topic}): Refunds within 30 days for undelivered items."


order_specialist = Agent(
    name="Order Specialist",
    handoff_description="Handles shipment and delivery questions.",
    instructions="Use lookup_order_status for order questions. Be concise.",
    tools=[lookup_order_status],
)

policy_specialist = Agent(
    name="Policy Specialist",
    handoff_description="Handles refund, cancellation, and shipping policy questions.",
    instructions="Use lookup_policy for policy questions. Be concise.",
    tools=[lookup_policy],
)

triage_agent = Agent(
    name="Triage Agent",
    instructions=(
        "Route order questions to Order Specialist "
        "and policy questions to Policy Specialist."
    ),
    handoffs=[order_specialist, policy_specialist],
)


async def main():
    await Runner.run(triage_agent, "Where is my order ORD-1001?")
    tracer.force_flush()


asyncio.run(main())
In HoneyHive, you’ll see the full trace hierarchy: triage agent routing to the specialist, which then calls the tool and responds.

Example: Agents-as-Tools

To keep a single coordinator in control while delegating to specialists (which the model can invoke like any other tool, including in parallel), wrap specialist agents as tools using agent.as_tool():
import asyncio
import os

from agents import Agent, Runner, function_tool
from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor

from honeyhive import HoneyHiveTracer

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
OpenAIAgentsInstrumentor().instrument(tracer_provider=tracer.provider)


@function_tool
def lookup_order_status(order_id: str) -> str:
    """Look up order status."""
    return f"Order {order_id}: delayed, ETA 8 days"


@function_tool
def lookup_policy(topic: str) -> str:
    """Look up support policy."""
    return f"Policy ({topic}): Cancellation allowed before shipment."


order_tool_agent = Agent(
    name="Order Lookup Agent",
    instructions="Use lookup_order_status to check orders.",
    tools=[lookup_order_status],
)

policy_tool_agent = Agent(
    name="Policy Lookup Agent",
    instructions="Use lookup_policy for policy questions.",
    tools=[lookup_policy],
)

coordinator = Agent(
    name="Support Coordinator",
    instructions=(
        "Use order_lookup and policy_lookup to gather information, "
        "then combine into a concise customer response."
    ),
    tools=[
        order_tool_agent.as_tool(
            tool_name="order_lookup",  # coordinator sees this agent as "order_lookup"
            tool_description="Look up order status information.",
        ),
        policy_tool_agent.as_tool(
            tool_name="policy_lookup",  # coordinator sees this agent as "policy_lookup"
            tool_description="Look up support policy information.",
        ),
    ],
)


async def main():
    await Runner.run(
        coordinator,
        "Order ORD-1003 is delayed. Can I cancel and get a refund?",
    )
    tracer.force_flush()


asyncio.run(main())
In HoneyHive, you’ll see the coordinator agent calling specialist sub-agents as tools, with nested LLM and tool spans.

Troubleshooting

Traces not appearing

  1. Check environment variables - Ensure HH_API_KEY and HH_PROJECT are set
  2. Pass the tracer provider - The instrumentor must receive tracer_provider=tracer.provider:
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor

tracer = HoneyHiveTracer.init(project="your-project")

# Correct - pass tracer_provider
OpenAIAgentsInstrumentor().instrument(tracer_provider=tracer.provider)

# Wrong - missing tracer_provider
OpenAIAgentsInstrumentor().instrument()
  3. Call force_flush() before exit - Ensure traces are exported before the process ends:
try:
    await Runner.run(agent, "Hello")
finally:
    tracer.force_flush()
  4. Check your model provider - Ensure OPENAI_API_KEY is set
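The environment-variable checks above can be scripted as a quick preflight. The helper below is illustrative (not part of the HoneyHive SDK) and only inspects the variable names used in this guide:

```python
import os


def check_tracing_env(required=("HH_API_KEY", "HH_PROJECT", "OPENAI_API_KEY")):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.getenv(name)]


missing = check_tracing_env()
if missing:
    print("Missing environment variables: " + ", ".join(missing))
```

Run it before initializing the tracer to fail fast with a clear message instead of silently exporting no traces.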

Enrich Your Traces

Add user IDs and custom metadata to agent traces

Custom Spans

Create spans for business logic around agent calls

Distributed Tracing

Trace agents across service boundaries
