AutoGen AgentChat is Microsoft’s open-source framework for building multi-agent AI applications. It supports single agents with tools, multi-agent Swarms with handoffs, and model-based speaker selection via SelectorGroupChat. HoneyHive integrates with AutoGen through the OpenInference AutoGen AgentChat instrumentor, which captures agent-level spans including agent names, tool calls, handoffs, and model requests.
Add tracing in 3 lines of code. Initialize HoneyHiveTracer, call AutogenAgentChatInstrumentor().instrument(tracer_provider=tracer.provider), and all agent runs, tool calls, and handoffs are automatically traced.

Quick Start

To see where to initialize the tracer for your environment, including AWS Lambda and long-running servers, see Tracer Initialization.
pip install "honeyhive>=1.0.0rc0" autogen-agentchat "autogen-ext[openai]" \
    openinference-instrumentation-autogen-agentchat
import os
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.autogen_agentchat import AutogenAgentChatInstrumentor

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
instrumentor = AutogenAgentChatInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

# Your AutoGen code works unchanged — traces appear automatically
Order matters. Call HoneyHiveTracer.init(...) before instrumentor.instrument(...), and instrument before creating agents.

What Gets Traced

The AutoGen AgentChat instrumentor captures:
  • Agent runs - Every agent.run() and team.run() call with inputs and outputs
  • LLM calls - Model requests via OpenAIChatCompletionClient.create with messages, responses, and token usage
  • Tool executions - Each tool call with arguments and results
  • Agent handoffs - Delegation between agents in Swarm and SelectorGroupChat teams

Example 1: Single Agent with Tools

A single agent handling customer support queries with tool calls and multi-turn conversation history.
import asyncio
import os

from honeyhive import HoneyHiveTracer
from openinference.instrumentation.autogen_agentchat import AutogenAgentChatInstrumentor

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
instrumentor = AutogenAgentChatInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)


def lookup_order_status(order_id: str) -> dict:
    """Look up the current status of a customer order."""
    return {"order_id": order_id, "state": "shipped", "eta_days": 2}


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
    agent = AssistantAgent(
        name="support_agent",
        model_client=model_client,
        tools=[lookup_order_status],
        system_message="You are a customer support agent. Use tools to answer questions.",
    )

    # Turn 1
    await agent.run(task="Check order ORD-1002 status.")

    # Turn 2 - agent retains conversation history
    await agent.run(task="What about order ORD-1003?")

    await model_client.close()
    tracer.force_flush()

asyncio.run(main())
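The tool above is a plain Python function: AutoGen derives the tool schema from the signature's type hints and the docstring, so keeping both accurate directly improves tool-call quality. A second tool in the same style (a hypothetical refund check, with an assumed 30-day policy, not part of AutoGen or HoneyHive) would look like this:

```python
def check_refund_eligibility(order_id: str, days_since_delivery: int) -> dict:
    """Check whether an order is still within the refund window."""
    # Hypothetical policy: refunds are allowed within 30 days of delivery.
    eligible = days_since_delivery <= 30
    return {"order_id": order_id, "eligible": eligible, "window_days": 30}
```

Pass it alongside the existing tool via tools=[lookup_order_status, check_refund_eligibility]; each invocation shows up as its own tool-execution span.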

Example 2: Multi-Agent Swarm with Handoffs

A Swarm team where a triage agent delegates to specialists. Handoffs are automatically traced.
import asyncio
import os

from honeyhive import HoneyHiveTracer
from openinference.instrumentation.autogen_agentchat import AutogenAgentChatInstrumentor

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.teams import Swarm
from autogen_ext.models.openai import OpenAIChatCompletionClient

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
instrumentor = AutogenAgentChatInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)


def lookup_order_status(order_id: str) -> dict:
    """Look up the current status of a customer order."""
    return {"order_id": order_id, "state": "shipped", "eta_days": 2}


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")

    order_specialist = AssistantAgent(
        name="order_specialist",
        model_client=model_client,
        tools=[lookup_order_status],
        handoffs=["triage_agent"],
        system_message="Check orders, then hand off back to triage_agent.",
        description="Handles order status questions.",
    )

    triage_agent = AssistantAgent(
        name="triage_agent",
        model_client=model_client,
        handoffs=["order_specialist"],
        system_message=(
            "Route order questions to order_specialist. "
            "Once resolved, summarize and say TERMINATE."
        ),
        description="Routes customer requests to specialists.",
    )

    team = Swarm(
        [triage_agent, order_specialist],
        termination_condition=TextMentionTermination("TERMINATE"),
    )

    await team.run(task="What's the status of order ORD-1001?")

    await model_client.close()
    tracer.force_flush()

asyncio.run(main())
In HoneyHive, you’ll see the full trace hierarchy: triage agent → handoff → order specialist → tool call → handoff back → final response.

Example 3: SelectorGroupChat

A team in which a model picks the next agent to speak, which is useful for complex queries that require multiple specialists.
import asyncio
import os

from honeyhive import HoneyHiveTracer
from openinference.instrumentation.autogen_agentchat import AutogenAgentChatInstrumentor

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.teams import SelectorGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
instrumentor = AutogenAgentChatInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")

    order_agent = AssistantAgent(
        name="order_agent",
        model_client=model_client,
        system_message="You check order statuses.",
        description="Looks up order status and delivery information.",
    )

    resolution_agent = AssistantAgent(
        name="resolution_agent",
        model_client=model_client,
        system_message="Synthesize information into a final response. Say TERMINATE when done.",
        description="Drafts the final customer response.",
    )

    team = SelectorGroupChat(
        [order_agent, resolution_agent],
        model_client=model_client,
        termination_condition=TextMentionTermination("TERMINATE"),
    )

    await team.run(task="Check order ORD-1003 and draft a response.")

    await model_client.close()
    tracer.force_flush()

asyncio.run(main())

Example 4: @trace Decorator for Custom Logic

Use the @trace decorator to create parent spans that wrap agent calls with your own business logic. This groups the agent invocation and any surrounding processing into a single span visible in HoneyHive.
import asyncio
import os

from honeyhive import HoneyHiveTracer, trace
from openinference.instrumentation.autogen_agentchat import AutogenAgentChatInstrumentor

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
instrumentor = AutogenAgentChatInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)


def lookup_order_status(order_id: str) -> dict:
    """Look up the current status of a customer order."""
    return {"order_id": order_id, "state": "delayed", "eta_days": 5}


@trace(event_type="chain", event_name="escalation_workflow", tracer=tracer)
async def run_escalation_workflow(model_client):
    """Custom business logic wrapped in a single traced span."""
    agent = AssistantAgent(
        name="escalation_agent",
        model_client=model_client,
        tools=[lookup_order_status],
        system_message=(
            "Check the order status and policy, then recommend "
            "whether to escalate. Be concise."
        ),
    )

    result = await agent.run(
        task="Order ORD-1003 is delayed and the customer is upset. "
        "Check status and recommend next steps."
    )

    # Post-processing inside the same traced span
    final_message = result.messages[-1].content if result.messages else ""
    needs_escalation = "escalat" in final_message.lower()
    return {"response": final_message, "escalated": needs_escalation}


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")

    outcome = await run_escalation_workflow(model_client)
    print(f"Escalated: {outcome['escalated']}")

    await model_client.close()
    tracer.force_flush()

asyncio.run(main())
In HoneyHive, the escalation_workflow span appears as a parent containing the agent run, LLM calls, and tool calls nested underneath.
Use @trace when you need a parent span that groups agent calls with pre/post-processing logic (validation, routing, response formatting). For simple agent runs, the instrumentor’s automatic spans are sufficient.
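The substring check in Example 4 ("escalat" in the final message) is deliberately minimal; it also matches negations like "no need to escalate". A slightly more careful sketch (a hypothetical helper, not part of HoneyHive or AutoGen) uses whole-word matching and an explicit negation check:

```python
import re

# Hypothetical post-processing helper: decide whether an agent's final
# message recommends escalation. Whole-word matching covers "escalate",
# "escalated", "escalating", and "escalation"; a negation pattern filters
# out phrases like "no need to escalate".
ESCALATE_PATTERN = re.compile(r"\bescalat(e|ed|ing|ion)\b", re.IGNORECASE)
NEGATION_PATTERN = re.compile(r"\b(no|not|don't|avoid)\s+(need\s+to\s+)?escalat", re.IGNORECASE)


def needs_escalation(message: str) -> bool:
    """Return True if the message recommends escalation."""
    if NEGATION_PATTERN.search(message):
        return False
    return bool(ESCALATE_PATTERN.search(message))
```

Because this runs inside the @trace-decorated function, the decision is captured in the same escalation_workflow span as the agent run it post-processes.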

Troubleshooting

Traces not appearing

  • Ensure HoneyHiveTracer.init(...) is called before instrumentor.instrument(...).
  • Pass tracer_provider=tracer.provider to instrument() so AutoGen uses HoneyHive’s tracer provider.
  • Ensure HH_API_KEY, HH_PROJECT, and OPENAI_API_KEY are set.
  • Call tracer.force_flush() before your process exits to ensure all spans are exported.
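Before digging deeper, a quick preflight can rule out the most common cause: missing environment variables. A minimal stdlib sketch, checking the three variables this guide relies on:

```python
import os

# The environment variables this guide assumes are set.
REQUIRED_VARS = ("HH_API_KEY", "HH_PROJECT", "OPENAI_API_KEY")


def missing_env_vars(environ=os.environ, required=REQUIRED_VARS):
    """Return the required variable names that are unset or empty."""
    return [name for name in required if not environ.get(name)]


# Fail fast with a clear message instead of silently dropping traces.
missing = missing_env_vars()
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
```

Run this before HoneyHiveTracer.init(...); an empty result means the environment is at least configured, and the problem lies elsewhere (ordering, flushing, or the wrong instrumentor).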

Only seeing OpenAI spans, not agent spans

Make sure you’re using openinference-instrumentation-autogen-agentchat (not openinference-instrumentation-openai). The AutoGen-specific instrumentor captures agent-level spans (agent names, handoffs, tool calls) in addition to underlying model calls.

Resources

Enrich Traces

Add user IDs and custom metadata to your spans.

Custom Spans

Create spans for business logic around agent calls.

Distributed Tracing

Trace agents across service boundaries.

Query Trace Data

Export traces programmatically.