Add HoneyHive observability to your OpenAI Agents SDK applications
OpenAI Agents SDK is a lightweight framework for building multi-agent workflows. It provides agents, handoffs, tools, guardrails, and session management out of the box. HoneyHive integrates with the Agents SDK via the OpenAIAgentsInstrumentor, automatically capturing agent runs, tool calls, handoffs, and LLM interactions.
Add HoneyHive tracing in just 3 lines of code. Initialize the tracer and instrumentor, and all agent runs, tool calls, handoffs, and model calls are automatically traced.
To see where to initialize the tracer for your environment, including AWS Lambda and long-running servers, see Tracer Initialization.
```python
import os

from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
OpenAIAgentsInstrumentor().instrument(tracer_provider=tracer.provider)

# Your existing Agents SDK code works unchanged
# Call tracer.force_flush() before your process exits
```
Agent runs - Every agent invocation with inputs and outputs
LLM calls - Model requests, responses, and token usage
Tool calls - Each tool execution with arguments and results
Handoffs - Agent-to-agent delegation with routing context
Agents-as-tools - Nested agent delegation via agent.as_tool()
No manual @trace decorators are required for the standard Agents SDK path.
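Handoffs are captured automatically as well. Below is a minimal sketch of agent-to-agent delegation via the Agents SDK `handoffs` parameter; the agent names and instructions are illustrative placeholders, and the setup mirrors the Quick Start above:

```python
import asyncio
import os

from agents import Agent, Runner
from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor

from honeyhive import HoneyHiveTracer

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
OpenAIAgentsInstrumentor().instrument(tracer_provider=tracer.provider)

# Illustrative specialist agent; name and instructions are placeholders.
billing_agent = Agent(
    name="Billing Agent",
    instructions="Answer billing and refund questions.",
)

# The triage agent can delegate to billing_agent via handoffs; the
# delegation appears in HoneyHive as a handoff span with routing context.
triage_agent = Agent(
    name="Triage Agent",
    instructions="Route billing questions to the Billing Agent.",
    handoffs=[billing_agent],
)


async def main():
    await Runner.run(triage_agent, "I was double-charged last month.")
    tracer.force_flush()


asyncio.run(main())
```

Unlike `agent.as_tool()` (shown later), a handoff transfers the conversation to the target agent rather than returning a result to the caller.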
The openai-agents pip package exposes agents as its top-level module, so imports use `from agents import ...`. All examples below assume the Quick Start setup above; the setup block is repeated in each snippet so each one is individually copy-pasteable.
```python
import asyncio
import os

from agents import Agent, Runner, function_tool
from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor

from honeyhive import HoneyHiveTracer

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
OpenAIAgentsInstrumentor().instrument(tracer_provider=tracer.provider)


@function_tool
def lookup_order_status(order_id: str) -> str:
    """Look up the current status and ETA for a customer order."""
    statuses = {
        "ORD-1001": {"state": "shipped", "eta_days": 2},
        "ORD-1002": {"state": "processing", "eta_days": 5},
        "ORD-1003": {"state": "delayed", "eta_days": 8},
    }
    status = statuses.get(order_id.upper())
    if status:
        return (
            f"Order {order_id.upper()}: {status['state']}, "
            f"estimated delivery in {status['eta_days']} days."
        )
    return f"Order {order_id.upper()}: not found in the system."


@function_tool
def lookup_policy(topic: str) -> str:
    """Look up support policy by topic: refund, cancellation, or shipping."""
    policies = {
        "refund": "Refunds are available within 30 days for undelivered or damaged items.",
        "cancellation": "Cancellation is allowed before shipment. Delayed orders can request assisted cancellation.",
        "shipping": "Delays beyond 7 days trigger proactive support outreach.",
    }
    return policies.get(topic.strip().lower(), "No policy found.")


agent = Agent(
    name="Support Generalist",
    instructions=(
        "You are a customer support generalist. Use tools for order status and "
        "policy questions, then reply with short, customer-friendly answers."
    ),
    tools=[lookup_order_status, lookup_policy],
)


async def main():
    await Runner.run(
        agent,
        "For delayed order ORD-1003, explain the cancellation policy and "
        "recommended next steps.",
    )
    tracer.force_flush()


asyncio.run(main())
```
For parallel execution, wrap specialist agents as tools using agent.as_tool():
```python
import asyncio
import os

from agents import Agent, Runner, function_tool
from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor

from honeyhive import HoneyHiveTracer

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
OpenAIAgentsInstrumentor().instrument(tracer_provider=tracer.provider)


@function_tool
def lookup_order_status(order_id: str) -> str:
    """Look up order status."""
    return f"Order {order_id}: delayed, ETA 8 days"


@function_tool
def lookup_policy(topic: str) -> str:
    """Look up support policy."""
    return f"Policy ({topic}): Cancellation allowed before shipment."


order_tool_agent = Agent(
    name="Order Lookup Agent",
    instructions="Use lookup_order_status to check orders.",
    tools=[lookup_order_status],
)

policy_tool_agent = Agent(
    name="Policy Lookup Agent",
    instructions="Use lookup_policy for policy questions.",
    tools=[lookup_policy],
)

coordinator = Agent(
    name="Support Coordinator",
    instructions=(
        "Use order_lookup and policy_lookup to gather information, "
        "then combine into a concise customer response."
    ),
    tools=[
        order_tool_agent.as_tool(
            tool_name="order_lookup",  # coordinator sees this agent as "order_lookup"
            tool_description="Look up order status information.",
        ),
        policy_tool_agent.as_tool(
            tool_name="policy_lookup",  # coordinator sees this agent as "policy_lookup"
            tool_description="Look up support policy information.",
        ),
    ],
)


async def main():
    await Runner.run(
        coordinator,
        "Order ORD-1003 is delayed. Can I cancel and get a refund?",
    )
    tracer.force_flush()


asyncio.run(main())
```
In HoneyHive, you’ll see the coordinator agent calling specialist sub-agents as tools, with nested LLM and tool spans.