We provide four ways to trace your LLM application:

  1. Python Tracer
  2. TypeScript Tracer
  3. LangChain Python Tracer
  4. LlamaIndex Python Tracer

If your application runs in a different language, please refer to our Tracing docs to learn how to construct API requests to our events endpoint and log data accurately.
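
For example, an event payload for the events endpoint might be assembled as sketched below. The field names, helper function, and endpoint behavior here are assumptions for illustration only; consult the Tracing docs for the authoritative schema.

```python
import json
import time
import uuid

# Hypothetical event payload builder -- field names are illustrative,
# not the confirmed HoneyHive schema.
def build_model_event(project, inputs, outputs, duration_ms):
    return {
        "project": project,
        "event_id": str(uuid.uuid4()),
        "event_type": "model",  # one of: model | tool | chain
        "event_name": "OpenAI Chat",
        "inputs": inputs,
        "outputs": outputs,
        "duration": duration_ms,
        "start_time": int(time.time() * 1000),
    }

event = build_model_event(
    project="PROJECT_NAME",
    inputs={"chat_history": [{"role": "user", "content": "Hi"}]},
    outputs={"content": "Hello!"},
    duration_ms=420,
)
# Serialize and POST this to the events endpoint with your API key.
payload = json.dumps(event)
```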

Our tracers let you track

  1. Model inference calls as model events
  2. External API calls (like retrieval) as tool events
  3. Groups of inference & external API calls as chains events
  4. An entire trace of requests as a session

To familiarize yourself with our data model and schema, read our in-depth Data Model Guide.
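
These four event types nest inside one another. The plain-Python sketch below illustrates that hierarchy only; it is not the SDK's actual data model.

```python
# Illustrative nesting of HoneyHive event types (not the real schema):
# a session contains chain events, which group model and tool events.
session = {
    "session_name": "Customer RAG Session",
    "children": [
        {
            "event_type": "chain",
            "event_name": "RAG Pipeline",
            "children": [
                {"event_type": "tool", "event_name": "Vector Retrieval"},
                {"event_type": "model", "event_name": "OpenAI Chat"},
            ],
        }
    ],
}

# Collect the leaf event types under the chain.
event_types = [e["event_type"] for e in session["children"][0]["children"]]
```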

Tracing OpenAI requests

Prerequisites

  • You have already created a project in HoneyHive, as explained here.
  • You have an API key for your project, as explained here.

Expected time: a few minutes

Steps

Step 1: Installation

To install our SDK, run the following command in your shell:

pip install honeyhive

Step 2: Authenticate the SDK & initialize the tracer

To initialize the tracer, you need three details:

  1. Your HoneyHive API key
  2. The name of the project to log the trace to
  3. A name for this session, like “Chatbot Session” or “Customer RAG Session”

from honeyhive.utils.tracer import HoneyHiveTracer

# place the code below at the beginning of your application execution
tracer = HoneyHiveTracer(
    project="PROJECT_NAME",
    source="dev",
    name="PIPELINE_NAME",
    api_key="HONEYHIVE_API_KEY"
)
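
Hardcoding credentials is best avoided. A common pattern is to read the key from an environment variable and pass it to the tracer; the variable name `HONEYHIVE_API_KEY` below is a convention we're assuming, not something the SDK requires.

```python
import os

# For illustration only: seed a demo value so the lookup below succeeds.
# In practice, export HONEYHIVE_API_KEY in your shell or deployment config.
os.environ.setdefault("HONEYHIVE_API_KEY", "demo-key")

# Read the key from the environment instead of hardcoding it,
# then pass it as api_key=api_key when constructing HoneyHiveTracer.
api_key = os.environ["HONEYHIVE_API_KEY"]
```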

Please refer to the full SDK reference for the complete list of configuration options.

Step 3: Trace your OpenAI call

# wrap your OpenAI call inside the `tracer.model` context manager
with tracer.model(
    event_name="OpenAI Chat",
    input={ "chat_history": messages }
):
    openai_response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=messages,
    )
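
Conceptually, a context manager like `tracer.model` times the wrapped call and records an event when the block exits. The minimal stand-in below is not HoneyHive's implementation; it only illustrates the pattern.

```python
import time
from contextlib import contextmanager

events = []

# A minimal stand-in for tracer.model -- NOT the HoneyHive implementation,
# just a sketch of how a tracing context manager can work.
@contextmanager
def model(event_name, input):
    start = time.time()
    try:
        yield
    finally:
        # Record the event even if the wrapped call raises.
        events.append({
            "event_type": "model",
            "event_name": event_name,
            "inputs": input,
            "duration_ms": (time.time() - start) * 1000,
        })

with model(event_name="OpenAI Chat", input={"chat_history": []}):
    pass  # your client.chat.completions.create(...) call would go here
```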

Step 4: Run your code & log the trace to HoneyHive

Your final code should look like this:

from honeyhive.utils.tracer import HoneyHiveTracer

tracer = HoneyHiveTracer(
    project="PROJECT_NAME",
    source="dev",
    name="PIPELINE_NAME",
    api_key="HONEYHIVE_API_KEY"
)

with tracer.model(
    event_name="OpenAI Chat",
    input={ "chat_history": messages }
):
    openai_response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=messages,
    )

Now, you can run your code and the trace will be logged to HoneyHive.

View the trace

Now that you have successfully logged your OpenAI call, you can review it in the platform.

  1. Navigate to your project via the projects page or the project dropdown in the header.
  2. Open the session you just logged to view the full trace.

Next Steps

Refer to our detailed documentation for the custom tracers to understand how to trace different kinds of applications.

  • Detailed Tracing Guides
  • Integrations with 3rd-party frameworks