Quickstart
Get started with tracing OpenAI calls with HoneyHive
We provide four ways to trace your LLM application:
- Python Tracer
- TypeScript Tracer
- LangChain Python Tracer
- LlamaIndex Python Tracer
With our tracers, you can track:
- Model inference calls as `model` events
- External API calls (like retrieval) as `tool` events
- Groups of inference & external API calls as `chain` events
- An entire trace of requests as a `session`
Tracing OpenAI requests
Prerequisites
- You have already created a project in HoneyHive, as explained here.
- You have an API key for your project, as explained here.
Expected time: a few minutes
Steps
Installation
To install our SDKs, run the following commands in the shell.
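For example, assuming the SDKs are published under the `honeyhive` package name on PyPI and npm (check the SDK pages linked below for the exact names):

```bash
# Python tracer (also used by the LangChain and LlamaIndex tracers)
pip install honeyhive

# TypeScript tracer
npm install honeyhive
```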
Authenticate the SDK & initialize the tracer
To initialize the tracer, you need three key details:
- Your HoneyHive API key
- The name of the project to log the trace to
- A name for this session, like “Chatbot Session” or “Customer RAG Session”
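Putting these together, here is a minimal initialization sketch. It assumes the Python SDK exposes a `HoneyHiveTracer.init` entry point and that your API key is stored in a `HONEYHIVE_API_KEY` environment variable; the project and session names are illustrative. See the full SDK spec referenced below for the exact signature.

```python
import os

from honeyhive import HoneyHiveTracer

# Initialize the tracer with the three details listed above.
HoneyHiveTracer.init(
    api_key=os.environ["HONEYHIVE_API_KEY"],  # your HoneyHive API key
    project="My Chatbot Project",             # project to log the trace to
    session_name="Chatbot Session",           # name for this session
)
```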
Please refer to the full spec for the SDKs here.
Trace your OpenAI call
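Once the tracer is initialized, calls made through the official `openai` client are assumed to be captured automatically as `model` events in the session, so a plain chat-completion call is all that is needed. The model name here is illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# This inference call is recorded as a `model` event in the session.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, world!"}],
)
print(response.choices[0].message.content)
```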
Run your code & log the trace to HoneyHive
Your final code should look like this:
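A minimal end-to-end sketch combining the steps above, with the same assumed `HoneyHiveTracer.init` signature and illustrative names:

```python
import os

from honeyhive import HoneyHiveTracer
from openai import OpenAI

# 1. Authenticate & initialize the tracer.
HoneyHiveTracer.init(
    api_key=os.environ["HONEYHIVE_API_KEY"],
    project="My Chatbot Project",
    session_name="Chatbot Session",
)

# 2. Make the OpenAI call; it is traced as a `model` event in this session.
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, world!"}],
)
print(response.choices[0].message.content)
```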
Now, you can run your code and the trace will be logged to HoneyHive.
View the trace
Now that you have successfully logged your OpenAI call, you can review it in the platform.
- Navigate to the project in the platform via the projects page or the dropdown in the header.
- Open the session to view the logged trace.
Next Steps
Refer to our detailed documentation on the custom tracers to learn how to trace different kinds of applications.
Detailed Tracing Guides
Python
Learn how to use HoneyHive’s Python Tracer.
TypeScript
Learn how to use HoneyHive’s SessionTracer in TypeScript.
Integrations with 3rd-party frameworks
LlamaIndex
Learn how HoneyHive’s LlamaIndex tracer works.
LangChain
Learn how HoneyHive’s LangChain tracer works.