Add HoneyHive observability to your AutoGen AgentChat applications
AutoGen AgentChat is Microsoft’s open-source framework for building multi-agent AI applications. It supports single agents with tools, multi-agent Swarms with handoffs, and model-based speaker selection via SelectorGroupChat. HoneyHive integrates with AutoGen through the OpenInference AutoGen AgentChat instrumentor, which captures agent-level spans including agent names, tool calls, handoffs, and model requests.
Add tracing in three lines of code: initialize HoneyHiveTracer, call AutogenAgentChatInstrumentor().instrument(tracer_provider=tracer.provider), and every agent run, tool call, and handoff is traced automatically.
A Swarm team where a triage agent delegates to specialists. Handoffs are automatically traced.
```python
import asyncio
import os

from honeyhive import HoneyHiveTracer
from openinference.instrumentation.autogen_agentchat import AutogenAgentChatInstrumentor

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.teams import Swarm
from autogen_ext.models.openai import OpenAIChatCompletionClient

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)

instrumentor = AutogenAgentChatInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)


def lookup_order_status(order_id: str) -> dict:
    """Look up the current status of a customer order."""
    return {"order_id": order_id, "state": "shipped", "eta_days": 2}


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")

    order_specialist = AssistantAgent(
        name="order_specialist",
        model_client=model_client,
        tools=[lookup_order_status],
        handoffs=["triage_agent"],
        system_message="Check orders, then hand off back to triage_agent.",
        description="Handles order status questions.",
    )

    triage_agent = AssistantAgent(
        name="triage_agent",
        model_client=model_client,
        handoffs=["order_specialist"],
        system_message=(
            "Route order questions to order_specialist. "
            "Once resolved, summarize and say TERMINATE."
        ),
        description="Routes customer requests to specialists.",
    )

    team = Swarm(
        [triage_agent, order_specialist],
        termination_condition=TextMentionTermination("TERMINATE"),
    )

    await team.run(task="What's the status of order ORD-1001?")

    await model_client.close()
    tracer.force_flush()


asyncio.run(main())
```
In HoneyHive, you’ll see the full trace hierarchy: triage agent → handoff → order specialist → tool call → handoff back → final response.
Use the @trace decorator to create parent spans that wrap agent calls with your own business logic. This groups the agent invocation and any surrounding processing into a single span visible in HoneyHive.
```python
import asyncio
import os

from honeyhive import HoneyHiveTracer, trace
from openinference.instrumentation.autogen_agentchat import AutogenAgentChatInstrumentor

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)

instrumentor = AutogenAgentChatInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)


def lookup_order_status(order_id: str) -> dict:
    """Look up the current status of a customer order."""
    return {"order_id": order_id, "state": "delayed", "eta_days": 5}


@trace(event_type="chain", event_name="escalation_workflow", tracer=tracer)
async def run_escalation_workflow(model_client):
    """Custom business logic wrapped in a single traced span."""
    agent = AssistantAgent(
        name="escalation_agent",
        model_client=model_client,
        tools=[lookup_order_status],
        system_message=(
            "Check the order status and policy, then recommend "
            "whether to escalate. Be concise."
        ),
    )

    result = await agent.run(
        task="Order ORD-1003 is delayed and the customer is upset. "
        "Check status and recommend next steps."
    )

    # Post-processing inside the same traced span
    final_message = result.messages[-1].content if result.messages else ""
    needs_escalation = "escalat" in final_message.lower()
    return {"response": final_message, "escalated": needs_escalation}


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
    outcome = await run_escalation_workflow(model_client)
    print(f"Escalated: {outcome['escalated']}")
    await model_client.close()
    tracer.force_flush()


asyncio.run(main())
```
In HoneyHive, the escalation_workflow span appears as a parent containing the agent run, LLM calls, and tool calls nested underneath.
Use @trace when you need a parent span that groups agent calls with pre/post-processing logic (validation, routing, response formatting). For simple agent runs, the instrumentor’s automatic spans are sufficient.
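To see why the parent-span pattern matters independently of any SDK, here is a stdlib-only toy sketch of the grouping behavior. Everything in it (`mock_trace`, `mock_agent_run`, the `SPANS` list) is hypothetical illustration, not the HoneyHive API: the real `@trace` decorator emits OpenTelemetry spans rather than appending to a list.

```python
import functools

# Toy span recorder. The real HoneyHive @trace decorator emits
# OpenTelemetry spans; this list just makes the nesting visible.
SPANS = []

def mock_trace(event_name):
    """Open a parent span for the duration of the wrapped call."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            SPANS.append(("start", event_name))
            try:
                return fn(*args, **kwargs)
            finally:
                SPANS.append(("end", event_name))
        return wrapper
    return decorator

def mock_agent_run(task):
    """Stand-in for an instrumented agent call; its span nests
    inside whatever parent span is open when it runs."""
    SPANS.append(("start", "agent_run"))
    SPANS.append(("end", "agent_run"))
    return f"handled: {task}"

@mock_trace("escalation_workflow")
def workflow(task):
    # Pre-processing, the agent call, and post-processing all land
    # inside the single "escalation_workflow" parent span.
    response = mock_agent_run(task)
    return {"response": response, "escalated": "delayed" in task}

result = workflow("order delayed")
# SPANS now shows agent_run nested inside escalation_workflow:
# [("start", "escalation_workflow"), ("start", "agent_run"),
#  ("end", "agent_run"), ("end", "escalation_workflow")]
```

The `try`/`finally` mirrors a property worth noting: the parent span is closed even if the wrapped function raises, so failed workflows still appear as complete spans.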
Make sure you’re using openinference-instrumentation-autogen-agentchat (not openinference-instrumentation-openai). The AutoGen-specific instrumentor captures agent-level spans (agent names, handoffs, tool calls) in addition to underlying model calls.
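A typical install for the examples above might look like the following. The instrumentor package name comes from this guide; `honeyhive`, `autogen-agentchat`, and `autogen-ext[openai]` are the usual PyPI names for the imports used here, but check your package index if they differ.

```shell
pip install honeyhive \
    openinference-instrumentation-autogen-agentchat \
    autogen-agentchat "autogen-ext[openai]"
```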