# Quickstart

Get started tracing sessions with HoneyHive.
With HoneyHive, we allow users to track:

- Model inference calls as `model` events
- External API calls (like retrieval) as `tool` events
- Groups of inference & external API calls as `chain` events
- An entire trace of requests as a `session`
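As a sketch of how these event types map onto code, the Python SDK exposes a `trace` decorator that records a function call as an event within the active session. The decorator name and the helper function below are assumptions for illustration; verify against the SDK reference.

```python
from honeyhive import trace  # assumed import path; check the SDK docs

# Hypothetical retrieval helper: decorating it records each call
# as a `tool` event inside the active session trace.
@trace
def retrieve_documents(query: str) -> list[str]:
    # ... call your vector DB here ...
    return ["doc-1", "doc-2"]
```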
## Logging a Trace
We use OpenTelemetry to automatically instrument your AI application.
### Prerequisites
- You have already created a project in HoneyHive, as explained here.
- You have an API key for your project, as explained here.
**Expected time:** 5 minutes
### Steps
#### 1. Installation

To install our SDKs, run the following commands in your shell.
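The commands below are a sketch; they assume the SDK is published as `honeyhive` on PyPI and npm, so confirm the package names against the current docs.

```shell
# Python SDK (assumed package name on PyPI)
pip install honeyhive

# TypeScript SDK (assumed package name on npm)
npm install honeyhive
```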
#### 2. Authenticate the SDK & initialize the tracer

To initialize the tracer, we need three key details:
- Your HoneyHive API key
- The name of the project to log the trace to
- A name for this session, like "Chatbot Session" or "Customer RAG Session"
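Putting those three details together, initialization looks roughly like this. This is a sketch based on HoneyHive's Python SDK; the `HoneyHiveTracer.init` entry point and its parameter names are assumptions to verify against the current SDK reference.

```python
from honeyhive import HoneyHiveTracer

# All three values are placeholders; the parameter names are assumptions
# based on HoneyHive's Python SDK and may differ in your version.
HoneyHiveTracer.init(
    api_key="<your HoneyHive API key>",
    project="<your project name>",
    session_name="Chatbot Session",  # any label that identifies this session
)
```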
#### 3. View the trace
Now that you have successfully traced your session, you can review it in the platform.
- Navigate to the project in the platform via the projects page or the dropdown in the header.
## Auto-instrumentation
We use OpenTelemetry along with a set of custom extensions that automatically instrument your LLM API and vector database requests in Python and TypeScript.
Here’s our list of supported providers for auto-instrumentation:
| LLM Providers | Vector DBs |
|---|---|
| OpenAI / Azure OpenAI | Chroma |
| Anthropic | Pinecone |
| Cohere | Qdrant |
| Ollama | Weaviate |
| Mistral AI | Milvus |
| HuggingFace | |
| Bedrock (AWS) | |
| Replicate | |
| Vertex AI (GCP) | |
| IBM Watsonx AI | |
| Together AI | |
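Once the tracer is initialized, calls to any supported provider are captured without extra tracing code. Here is a minimal sketch using OpenAI's Python client; the `HoneyHiveTracer.init` call mirrors the quickstart above and its parameter names are assumptions, while the OpenAI client usage follows that library's standard chat completions API.

```python
from honeyhive import HoneyHiveTracer
from openai import OpenAI

# Placeholder credentials; parameter names assumed from HoneyHive's Python SDK.
HoneyHiveTracer.init(
    api_key="<your HoneyHive API key>",
    project="<your project name>",
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# This request is auto-instrumented and shows up as a `model` event
# in the session trace -- no manual logging required.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```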