HoneyHive lets you track:

  1. Model inference calls as model events
  2. External API calls (like retrieval) as tool events
  3. Groups of inference & external API calls as chain events
  4. An entire trace of requests as a session
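The four event types above nest inside one another. The sketch below is purely illustrative: the field names are hypothetical and are not the HoneyHive schema.

```python
# Hypothetical sketch of how a session trace nests the event types above.
# Field names are illustrative only -- they are not the HoneyHive schema.
session = {
    "session_name": "Customer RAG Session",  # the root: one trace of requests
    "events": [
        {
            "event_type": "chain",  # a chain groups related calls
            "children": [
                {"event_type": "tool", "name": "vector_search"},   # external API call
                {"event_type": "model", "name": "openai_chat"},    # model inference call
            ],
        },
    ],
}

# The session is the root; chains group tool and model events beneath it.
event_types = sorted({c["event_type"] for c in session["events"][0]["children"]})
print(event_types)  # ['model', 'tool']
```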

Logging a Trace

We use OpenTelemetry to automatically instrument your AI application.

Prerequisites

  • You have already created a project in HoneyHive, as explained here.
  • You have an API key for your project, as explained here.

Expected Time: 5 minutes

Steps

1. Installation

To install the SDK, run the following command in your shell.

pip install honeyhive
2. Authenticate the SDK & initialize the tracer

To initialize the tracer, provide 3 key details:

  1. Your HoneyHive API key
  2. The name of the project to log the trace to
  3. A name for this session, like “Chatbot Session” or “Customer RAG Session”

You can also set source to tag the environment the trace came from (e.g. “prod” or “dev”).

from honeyhive.tracer import HoneyHiveTracer

# place the code below at the beginning of your application execution
HoneyHiveTracer.init(
    api_key=MY_HONEYHIVE_API_KEY,
    project=MY_HONEYHIVE_PROJECT_NAME,
    source=MY_SOURCE, # e.g. "prod", "dev", etc.
    session_name=MY_SESSION_NAME,
)

# your session will now be automatically instrumented
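In practice, the placeholder values above are usually supplied via environment variables rather than hard-coded. A minimal sketch of that pattern; the environment variable names (HH_API_KEY, etc.) are assumptions, not names the SDK requires:

```python
import os

# Hypothetical environment-variable names -- use whatever fits your deployment.
MY_HONEYHIVE_API_KEY = os.environ.get("HH_API_KEY", "")
MY_HONEYHIVE_PROJECT_NAME = os.environ.get("HH_PROJECT", "my-project")
MY_SOURCE = os.environ.get("HH_SOURCE", "dev")
MY_SESSION_NAME = os.environ.get("HH_SESSION_NAME", "Chatbot Session")

# Fail fast if the key is missing rather than sending unauthenticated traces.
if not MY_HONEYHIVE_API_KEY:
    print("warning: HH_API_KEY is not set")
```

These variables can then be passed to HoneyHiveTracer.init() exactly as shown in the snippet above.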

3. View the trace

Now that you have successfully traced your session, you can review it in the platform.

  1. Navigate to your project via the projects page or the project dropdown in the header.
  2. Open the session you just logged to review its trace.

Supported Providers

We use OpenTelemetry along with a set of custom extensions that automatically instrument your LLM API and vector database requests in Python and TypeScript.

Here’s our list of supported providers for auto-instrumentation:

LLM Providers:

  • OpenAI / Azure OpenAI
  • Anthropic
  • Cohere
  • Ollama
  • Mistral AI
  • HuggingFace
  • Bedrock (AWS)
  • Replicate
  • Vertex AI (GCP)
  • IBM Watsonx AI
  • Together AI

Vector DBs:

  • Chroma
  • Pinecone
  • Qdrant
  • Weaviate
  • Milvus

Learn more