Add HoneyHive observability to your CrewAI applications
CrewAI is a multi-agent framework for orchestrating crews, tasks, tools, and manager-driven delegation. HoneyHive integrates with CrewAI through OpenInference instrumentors: crew orchestration spans come from CrewAIInstrumentor, while model spans come from the provider client that CrewAI actually calls underneath. This page uses OpenAI-backed CrewAI flows, so it layers OpenAIInstrumentor on top of CrewAIInstrumentor.
Recommended setup. Initialize HoneyHive, instrument CrewAI for orchestration spans, then instrument the model provider your CrewAI app actually uses.
To see where to initialize the tracer for your environment, including AWS Lambda and long-running servers, see Tracer Initialization.
The examples on this page use openai/gpt-4o-mini, so they use OpenAIInstrumentor. If your CrewAI app uses another provider, use the matching provider instrumentor when one exists.
OPENAI_API_KEY - Required for the examples on this page
HH_API_URL - Optional override for non-production HoneyHive environments
If you use a different model provider with CrewAI, set that provider's credentials and instrument that provider's client when an OpenInference instrumentor exists.
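Before running the examples, you can sanity-check that the environment is configured. This small helper is illustrative, not part of the HoneyHive SDK; it treats HH_API_URL as optional, matching the list above:

```python
import os

# Variables used by the examples on this page. HH_API_URL is an optional
# override for non-production environments, so it is not required here.
REQUIRED_VARS = ["OPENAI_API_KEY", "HH_API_KEY", "HH_PROJECT"]

def missing_env_vars(env=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

if __name__ == "__main__":
    missing = missing_env_vars()
    if missing:
        print("Missing environment variables:", ", ".join(missing))
```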
Crew runs - Crew kickoff spans and multi-step execution
Agent activity - Agent roles, prompts, outputs, and handoffs
Model requests - OpenAI-backed LLM calls with prompt and response payloads
Tool usage - Tool-call arguments and results from the example flow
No manual @trace decorators are required for the standard CrewAI path. Known limitation: with the current CrewAI + OpenInference integration, custom CrewAI function tools do not yet appear as separate standalone HoneyHive tool events; you still see tool usage in model/tool-call payloads.
import os

from crewai import Agent, Crew, Process, Task
from crewai.tools import tool
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.crewai import CrewAIInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor

MODEL = "openai/gpt-4o-mini"


@tool("OrderStatusLookup")
def lookup_order_status(order_id: str) -> str:
    """Look up the current status and ETA for a customer order."""
    statuses = {
        "ORD-1001": {"state": "shipped", "eta_days": 2},
        "ORD-1002": {"state": "processing", "eta_days": 5},
        "ORD-1003": {"state": "delayed", "eta_days": 8},
    }
    status = statuses.get(order_id.upper())
    if not status:
        return f"Order {order_id.upper()}: not found in the system."
    return (
        f"Order {order_id.upper()}: {status['state']}, "
        f"estimated delivery in {status['eta_days']} days."
    )


@tool("PolicyLookup")
def lookup_policy(topic: str) -> str:
    """Look up support policy by topic: refund, cancellation, or shipping."""
    policies = {
        "refund": "Refunds are available within 30 days for undelivered or damaged items.",
        "cancellation": "Cancellation is allowed before shipment. Delayed orders can request assisted cancellation.",
        "shipping": "Delays beyond 7 days trigger proactive support outreach.",
    }
    return policies.get(topic.strip().lower(), "No policy found.")


tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
    server_url=os.getenv("HH_API_URL"),
)

CrewAIInstrumentor().instrument(tracer_provider=tracer.provider)
OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)

support_generalist = Agent(
    role="Support Generalist",
    goal="Resolve order and policy questions using the available tools",
    backstory=(
        "You are a customer support generalist. Use tools for order status and "
        "policy questions, then reply with short, customer-friendly answers."
    ),
    tools=[lookup_order_status, lookup_policy],
    llm=MODEL,
    verbose=False,
)

task = Task(
    description=(
        "For delayed order ORD-1003, explain the cancellation policy and "
        "recommended next steps."
    ),
    expected_output=(
        "A concise support response that uses tools when needed and includes "
        "the final customer-facing answer."
    ),
    agent=support_generalist,
)

crew = Crew(
    agents=[support_generalist],
    tasks=[task],
    process=Process.sequential,
    verbose=False,
)

print(crew.kickoff())
Instrument your provider - CrewAIInstrumentor captures orchestration spans; you also need the matching provider instrumentor for model input/output details
Check your model provider - Ensure the API key for your chosen model provider is set (e.g., OPENAI_API_KEY)