HoneyHive provides distributed tracing for AI applications, from RAG pipelines to multi-agent systems. Traces give you a hierarchical view of execution across agent invocations, LLM requests, tool calls, and handoffs.
[Figure: tree structure of a trace showing nested events]

Getting Started

Automatic Tracing

The fastest way to start is automatic tracing, which instruments major LLM providers and vector databases with minimal setup using OpenTelemetry semantic conventions.
Using an agent framework? We currently document framework-specific setup for AWS Strands, Google ADK, and PydanticAI.
Deploying in Lambda, FastAPI, Flask, or Django? Read Tracer Initialization next to choose the right runtime setup pattern before you wire tracing into production.
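To make the idea concrete, here is a minimal stdlib-only sketch of what auto-instrumentation does under the hood: wrapping a provider call so every invocation records a span with a name and duration. The `instrument` helper and `FakeLLMClient` are illustrative assumptions, not the HoneyHive SDK; the real integration patches provider libraries for you.

```python
import functools
import time

def instrument(fn, spans, name=None):
    """Wrap a callable so each invocation appends a span record
    (name, start time, duration) to `spans`. This mirrors the core
    mechanic of automatic instrumentation. Illustrative only."""
    span_name = name or fn.__name__

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            return fn(*args, **kwargs)
        finally:
            spans.append({
                "name": span_name,
                "start": start,
                "duration_s": time.time() - start,
            })
    return wrapper

# A stand-in for a real provider client (hypothetical).
class FakeLLMClient:
    def complete(self, prompt):
        return f"echo: {prompt}"

spans = []
client = FakeLLMClient()
# Patch the method in place, as an auto-instrumentor would.
client.complete = instrument(client.complete, spans, name="llm.completion")
result = client.complete("hello")
```

After the call, `spans` holds one record named `llm.completion`; a real tracer would export such records over OpenTelemetry rather than collect them in a list.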

Custom Spans

Automatic tracing covers LLM and vector DB calls. For any other function in your codebase (preprocessing, postprocessing, business logic), use the @trace() decorator in Python or TypeScript to create custom spans that appear in your trace tree.
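The sketch below shows how a decorator like this can build the nested trace tree, using only the standard library. The `trace` function here is an illustrative stand-in written for this example, not the HoneyHive SDK's implementation: it attaches each decorated call as a child of whatever span is currently active, so nested calls produce nested spans.

```python
import contextvars
import functools

# The currently active span; child spans attach to it.
_current = contextvars.ContextVar("current_span")
ROOT = {"name": "root", "children": []}
_current.set(ROOT)

def trace(name=None):
    """Illustrative stand-in for a @trace() decorator: records the
    decorated call as a child span of the active span, then makes
    it the active span for the duration of the call."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            parent = _current.get()
            span = {"name": name or fn.__name__, "children": []}
            parent["children"].append(span)
            token = _current.set(span)
            try:
                return fn(*args, **kwargs)
            finally:
                _current.reset(token)
        return wrapper
    return deco

@trace()
def preprocess(doc):
    return doc.strip().lower()

@trace(name="pipeline")
def run(doc):
    return preprocess(doc)

out = run("  Hello ")
# ROOT now contains a "pipeline" span with "preprocess" nested inside it.
```

Because the decorator restores the previous active span on exit, sibling calls at the same level attach to the same parent, which is what produces the hierarchical view described above.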


Advanced Features