CrewAI is a multi-agent framework for orchestrating crews, tasks, tools, and manager-driven delegation.
HoneyHive integrates with CrewAI through OpenInference instrumentors: CrewAIInstrumentor emits the crew-orchestration spans, while model spans come from the provider client that CrewAI actually calls underneath. This page uses OpenAI-backed CrewAI flows, so it layers OpenAIInstrumentor on top of CrewAIInstrumentor.
## Quick Start
Recommended setup: initialize HoneyHive, instrument CrewAI for orchestration spans, then instrument the model provider your CrewAI app actually uses.
The examples on this page use openai/gpt-4o-mini, so they use OpenAIInstrumentor. If your CrewAI app uses another provider, use the matching provider instrumentor when one exists. Do not assume LiteLLM is always the active path.
```shell
uv pip install "honeyhive>=1.0.0rc0" crewai openinference-instrumentation-crewai openinference-instrumentation-openai
```
```python
import os

from honeyhive import HoneyHiveTracer
from openinference.instrumentation.crewai import CrewAIInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
    server_url=os.getenv("HH_API_URL"),
)

CrewAIInstrumentor().instrument(tracer_provider=tracer.provider)
OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)

# Your existing CrewAI code works unchanged
```
## Compatibility

| Requirement | Version |
|---|---|
| Python | 3.11+ |
| crewai | 1.10.1 |
## Required Environment Variables

- `HH_API_KEY` - Your HoneyHive API key
- `HH_PROJECT` - Your HoneyHive project name
- `OPENAI_API_KEY` - Required for the examples on this page
- `HH_API_URL` - Optional override for non-production HoneyHive environments
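These can be supplied as ordinary shell exports before running your app. The values below are placeholders, not real credentials:

```shell
# Placeholder values -- substitute your real credentials and project name.
export HH_API_KEY="your-honeyhive-api-key"
export HH_PROJECT="your-project-name"
export OPENAI_API_KEY="your-openai-api-key"

# Optional: only set this for non-production HoneyHive environments.
# export HH_API_URL="https://your-honeyhive-instance.example"
```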
If you use a different model provider with CrewAI, set that provider's credentials and instrument that provider's client when an OpenInference instrumentor exists.
## What Gets Traced
This setup captures:
- **Crew runs** - Crew kickoff spans and multi-step execution
- **Agent activity** - Agent roles, prompts, outputs, and handoffs
- **Model requests** - OpenAI-backed LLM calls with prompt and response payloads
- **Tool usage** - Tool-call arguments and results from the example flow
No manual @trace decorators are required for the standard CrewAI path.
Known limitation: with the current CrewAI + OpenInference integration, custom CrewAI function tools do not yet appear as separate standalone HoneyHive tool events. You still see tool usage in model/tool-call payloads.
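If you need standalone records of custom tool calls today, one library-agnostic workaround is to wrap the tool body yourself and forward the captured records to whatever tracing hook you use. The `record_tool_call` helper below is a hypothetical sketch, not part of HoneyHive or CrewAI:

```python
import functools
import time


def record_tool_call(fn):
    """Hypothetical wrapper that captures tool-call arguments and results.

    Library-agnostic sketch: in a real setup you would forward each record
    to your tracer instead of accumulating it in an in-memory list.
    """
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        started = time.time()
        result = fn(*args, **kwargs)
        wrapper.calls.append({
            "tool": fn.__name__,
            "args": args,
            "kwargs": kwargs,
            "result": result,
            "duration_s": round(time.time() - started, 4),
        })
        return result

    wrapper.calls = []
    return wrapper


@record_tool_call
def lookup_policy(topic: str) -> str:
    # Trimmed-down stand-in for the PolicyLookup tool in the example below.
    return {"refund": "Refunds within 30 days."}.get(topic, "No policy found.")


print(lookup_policy("refund"))
print(lookup_policy.calls[0]["tool"])  # every call is now captured by name
```

In a CrewAI app you would apply such a wrapper underneath the `@tool` decorator, so the captured records sit alongside the spans the instrumentors already emit.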
Full example: a single-agent support crew with two custom tools.

```python
import os

from crewai import Agent, Crew, Process, Task
from crewai.tools import tool
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.crewai import CrewAIInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor

MODEL = "openai/gpt-4o-mini"


@tool("OrderStatusLookup")
def lookup_order_status(order_id: str) -> str:
    """Look up the current status and ETA for a customer order."""
    statuses = {
        "ORD-1001": {"state": "shipped", "eta_days": 2},
        "ORD-1002": {"state": "processing", "eta_days": 5},
        "ORD-1003": {"state": "delayed", "eta_days": 8},
    }
    status = statuses.get(order_id.upper())
    if not status:
        return f"Order {order_id.upper()}: not found in the system."
    return (
        f"Order {order_id.upper()}: {status['state']}, "
        f"estimated delivery in {status['eta_days']} days."
    )


@tool("PolicyLookup")
def lookup_policy(topic: str) -> str:
    """Look up support policy by topic: refund, cancellation, or shipping."""
    policies = {
        "refund": "Refunds are available within 30 days for undelivered or damaged items.",
        "cancellation": "Cancellation is allowed before shipment. Delayed orders can request assisted cancellation.",
        "shipping": "Delays beyond 7 days trigger proactive support outreach.",
    }
    return policies.get(topic.strip().lower(), "No policy found.")


tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
    server_url=os.getenv("HH_API_URL"),
)

CrewAIInstrumentor().instrument(tracer_provider=tracer.provider)
OpenAIInstrumentor().instrument(tracer_provider=tracer.provider)

support_generalist = Agent(
    role="Support Generalist",
    goal="Resolve order and policy questions using the available tools",
    backstory=(
        "You are a customer support generalist. Use tools for order status and "
        "policy questions, then reply with short, customer-friendly answers."
    ),
    tools=[lookup_order_status, lookup_policy],
    llm=MODEL,
    verbose=False,
)

task = Task(
    description=(
        "For delayed order ORD-1003, explain the cancellation policy and "
        "recommended next steps."
    ),
    expected_output=(
        "A concise support response that uses tools when needed and includes "
        "the final customer-facing answer."
    ),
    agent=support_generalist,
)

crew = Crew(
    agents=[support_generalist],
    tasks=[task],
    process=Process.sequential,
    verbose=False,
)

print(crew.kickoff())
```
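Because the tool bodies are plain Python, you can sanity-check the lookup logic without running the crew or calling any APIs. A stripped-down copy of `lookup_order_status` (without the CrewAI `@tool` decorator):

```python
# Standalone copy of the order-status logic from the example above,
# without the @tool decorator, so it can be exercised directly.
def lookup_order_status(order_id: str) -> str:
    """Look up the current status and ETA for a customer order."""
    statuses = {
        "ORD-1001": {"state": "shipped", "eta_days": 2},
        "ORD-1002": {"state": "processing", "eta_days": 5},
        "ORD-1003": {"state": "delayed", "eta_days": 8},
    }
    status = statuses.get(order_id.upper())
    if not status:
        return f"Order {order_id.upper()}: not found in the system."
    return (
        f"Order {order_id.upper()}: {status['state']}, "
        f"estimated delivery in {status['eta_days']} days."
    )


# Lookups are case-insensitive thanks to .upper(); unknown IDs fall through.
print(lookup_order_status("ord-1003"))  # Order ORD-1003: delayed, estimated delivery in 8 days.
print(lookup_order_status("ORD-9999"))  # Order ORD-9999: not found in the system.
```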
For trace enrichment after setup, see Enriching Traces.