Introduction

CrewAI is a framework for orchestrating role-playing autonomous AI agents. By integrating CrewAI with HoneyHive, you can trace and monitor your agent interactions, enabling better visibility, evaluation, and improvement of your agent workflows.

Prerequisites

  • A HoneyHive account
  • A CrewAI project
  • A HoneyHive API key

Install the required packages:

pip install crewai honeyhive python-dotenv openai

Initializing HoneyHive Tracer

Use the following code to initialize HoneyHive tracing in your CrewAI project:

from honeyhive import HoneyHiveTracer, trace
import os
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Initialize HoneyHive Tracer
HoneyHiveTracer.init(
    api_key=os.getenv("HONEYHIVE_API_KEY"),
    project=os.getenv("HONEYHIVE_PROJECT_NAME", "crewai-demo"),
    source=os.getenv("HONEYHIVE_SOURCE", "dev"),
    session_name="your-crewai-session-name"
)

This initializes auto-tracing for your CrewAI application. You can customize the session name to organize your traces logically.

Using the @trace Decorator

HoneyHive provides a @trace decorator that you can use to trace specific functions in your CrewAI workflow:

from honeyhive import trace

@trace
def create_agents():
    # Your agent creation logic here
    pass

@trace
def create_tasks(agents, research_topic):
    # Your task creation logic here
    pass

@trace
def run_crew(agents, tasks):
    # Your crew execution logic here
    pass

By decorating key functions with @trace, you can create a hierarchical trace structure that reflects your CrewAI workflow.
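
For example, when a traced function calls other traced functions, those calls appear as child spans nested under the caller in the HoneyHive dashboard. The sketch below assumes the stub functions above and adds a hypothetical run_research_pipeline wrapper to illustrate the nesting:

from honeyhive import trace

@trace
def run_research_pipeline(research_topic):
    # Parent span: every traced call made inside appears as a child span
    agents = create_agents()                      # child span of run_research_pipeline
    tasks = create_tasks(agents, research_topic)  # child span of run_research_pipeline
    return run_crew(agents, tasks)                # child span of run_research_pipeline

The complete example at the end of this guide uses the same pattern, with main() acting as the parent span.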

Required Environment Variables

Make sure to set the following environment variables before running your application:

  • HONEYHIVE_API_KEY: Your HoneyHive API key
  • HONEYHIVE_PROJECT_NAME: The name of your HoneyHive project (defaults to “crewai-demo” in the example)
  • HONEYHIVE_SOURCE: The source of your traces (defaults to “dev” in the example)

You can use a .env file and the python-dotenv package to manage these environment variables.
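
For example, a .env file in your project root might look like the following (the values are placeholders; if your agents use OpenAI models, CrewAI will also expect an OPENAI_API_KEY):

HONEYHIVE_API_KEY=your-honeyhive-api-key
HONEYHIVE_PROJECT_NAME=crewai-demo
HONEYHIVE_SOURCE=dev
OPENAI_API_KEY=your-openai-api-key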

For the most up-to-date compatibility information, please refer to the HoneyHive documentation.

Enriching Properties

For information on how to enrich your traces and spans with additional context, see our enrichment documentation.
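
As a rough sketch, and assuming your installed honeyhive SDK version exposes the enrich_span and enrich_session helpers described in that documentation, attaching extra context could look like this (the metadata keys shown are arbitrary examples):

from honeyhive import trace, enrich_span, enrich_session

@trace
def create_agents():
    # Attach custom metadata to the span created by @trace (assumes enrich_span is available)
    enrich_span(metadata={"agent_count": 2, "llm_provider": "openai"})
    ...

# Attach metadata to the current session (assumes enrich_session is available)
enrich_session(metadata={"experiment": "crewai-research-v1"})

Check the enrichment documentation for the exact helpers and arguments supported by your SDK version.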

Adding Evaluators

Once traces have been logged in the HoneyHive platform, you can run evaluations on them with Python or TypeScript.

Complete Example

Below is a complete example demonstrating how to integrate HoneyHive tracing with a CrewAI workflow:

import os
from typing import Dict
from dotenv import load_dotenv
from crewai import Agent, Task, Crew, Process
from honeyhive import HoneyHiveTracer, trace

# Load environment variables
load_dotenv()

# Initialize HoneyHive Tracer
HoneyHiveTracer.init(
    api_key=os.getenv("HONEYHIVE_API_KEY"),
    project=os.getenv("HONEYHIVE_PROJECT_NAME", "crewai-demo"),
    source=os.getenv("HONEYHIVE_SOURCE", "dev"),
    session_name="crewai-research-crew"
)

@trace
def create_agents() -> Dict[str, Agent]:
    """Create and return a dictionary of agents with specific roles."""
    
    researcher = Agent(
        role="Research Analyst",
        goal="Conduct comprehensive research on the given topic",
        backstory="You're a senior research analyst with expertise in gathering and analyzing information from various sources.",
        verbose=True,
        allow_delegation=False,
    )
    
    writer = Agent(
        role="Content Writer",
        goal="Create well-structured, informative content based on research findings",
        backstory="You're an experienced content writer known for your ability to transform complex information into clear, engaging content.",
        verbose=True,
        allow_delegation=False,
    )
    
    return {"researcher": researcher, "writer": writer}

@trace
def create_tasks(agents: Dict[str, Agent], research_topic: str) -> Dict[str, Task]:
    """Create and return a dictionary of tasks for the agents."""
    
    research_task = Task(
        description=f"Research the following topic thoroughly: {research_topic}. Find key information, statistics, and expert opinions.",
        expected_output="A comprehensive research document with key findings, statistics, and expert opinions.",
        agent=agents["researcher"]
    )
    
    writing_task = Task(
        description=f"Using the research provided, create a well-structured article about {research_topic}.",
        expected_output="A well-structured, comprehensive article ready for publication.",
        agent=agents["writer"],
        context=[research_task]
    )
    
    return {"research_task": research_task, "writing_task": writing_task}

@trace
def run_crew(agents: Dict[str, Agent], tasks: Dict[str, Task]) -> str:
    """Create and run a crew with the given agents and tasks."""
    
    crew = Crew(
        agents=list(agents.values()),
        tasks=[tasks["research_task"], tasks["writing_task"]],
        process=Process.sequential,
        verbose=True
    )
    
    return crew.kickoff()

@trace
def main() -> None:
    """Main function to run the CrewAI demonstration with HoneyHive tracing."""
    
    # Define the research topic
    research_topic = "The impact of artificial intelligence on healthcare"
    
    # Create agents and tasks
    agents = create_agents()
    tasks = create_tasks(agents, research_topic)
    
    # Run the crew and get the result
    result = run_crew(agents, tasks)
    
    # Print the final result
    print("\n=== FINAL RESULT ===\n")
    print(result)

if __name__ == "__main__":
    main()
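
Assuming you save the example as, say, crewai_honeyhive_demo.py and your .env file is in place, you can run it with:

python crewai_honeyhive_demo.py

The traces for the run will then appear under the crewai-research-crew session in your HoneyHive project.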

What Gets Traced

When you use HoneyHive with CrewAI, the following information is traced:

  • Agent Creation: Details about the agents’ roles, goals, and backstories
  • Task Creation: Task descriptions, expected outputs, and agent assignments
  • Crew Execution: The entire workflow of the crew, including all agent interactions
  • Function Calls: Any function decorated with @trace will have its inputs and outputs traced

This tracing allows you to:

  1. Visualize your CrewAI workflow in the HoneyHive dashboard
  2. Analyze agent performance and interactions
  3. Debug issues in your agent workflows
  4. Evaluate the quality of agent outputs
  5. Monitor the execution time of different components

Conclusion

By integrating HoneyHive with CrewAI, you gain powerful tracing and monitoring capabilities for your AI agent workflows. This enables you to build more robust, reliable, and effective multi-agent systems.

For more information on HoneyHive tracing, please refer to our tracing documentation.