Python example repo

TypeScript example repo

Observability is crucial for LLM applications due to their non-deterministic nature. HoneyHive provides LLM-native observability, allowing you to gain meaningful insights into your application throughout all stages of development - from prototyping to production.

In this tutorial, we’ll walk through the process of adding observability to a simple RAG (Retrieval-Augmented Generation) application using HoneyHive.

Feel free to copy and paste this tutorial as a prompt into Cursor or GitHub Copilot to auto-instrument your code.

Sample Application

Before we add observability, let’s look at a basic RAG application without any instrumentation.

This application does the following:

  1. Retrieves relevant documents from a Pinecone vector database based on a query’s embedding.
  2. Uses the retrieved documents as context to generate a response using OpenAI’s GPT model.
  3. Returns the generated response.
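
In code, a minimal uninstrumented version of this pipeline might look like the sketch below. The clients, index name, model names, and the "text" metadata key are illustrative assumptions, not copied from the example repos above.

```python
import os

from openai import OpenAI
from pinecone import Pinecone

# Illustrative clients and index name -- adjust to your own setup.
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("documents")


def rag_pipeline(query: str) -> str:
    # 1. Embed the query and retrieve the top matching chunks from Pinecone.
    embedding = openai_client.embeddings.create(
        model="text-embedding-3-small", input=query
    ).data[0].embedding
    results = index.query(vector=embedding, top_k=3, include_metadata=True)
    docs = [match.metadata["text"] for match in results.matches]

    # 2. Use the retrieved chunks as context for the LLM call.
    completion = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Answer using the provided context."},
            {"role": "user", "content": f"Context: {docs}\n\nQuery: {query}"},
        ],
    )

    # 3. Return the generated response.
    return completion.choices[0].message.content
```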

While this application works, it lacks observability. We can’t easily track performance, debug issues, or gather insights about its behavior. Let’s add HoneyHive observability to address these limitations.

Tutorial overview

The golden path for adding observability with HoneyHive happens in two phases.

Phase 1: Capture the data

  • Auto-capture LLM and vector DB calls
  • Group the calls into the logical steps in your application
  • Trace any other missing steps that might be relevant

Phase 2: Enrich the data

  • Track configuration and metadata on a step level
  • Track user properties, feedback, and configuration on an application level

Phase 1 - Capture the data

Prerequisites

For this tutorial, we assume that you have:

  • Already set up a HoneyHive account
  • Copied your HoneyHive API key

The tutorial should be easy to follow even if you aren’t using OpenAI and Pinecone.

Setting Up Your Environment

First, install the HoneyHive SDK:
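
For Python, that’s a single pip install (assuming the package is published as honeyhive; see the example repos above for the TypeScript setup). The openai and pinecone packages used in the sketches below are installed the same way.

```bash
pip install honeyhive openai pinecone
```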

1. Auto-capture LLM and Vector DB Calls

At the beginning of your application, initialize the HoneyHive tracer:
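
A minimal sketch for Python, assuming the SDK exposes a HoneyHiveTracer.init entry point (the project and session names are illustrative):

```python
import os

from honeyhive import HoneyHiveTracer

# Initialize once at application startup, before any LLM or vector DB calls.
HoneyHiveTracer.init(
    api_key=os.environ["HONEYHIVE_API_KEY"],
    project="rag-demo",           # illustrative project name
    session_name="rag-tutorial",  # illustrative session name
)
```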

HoneyHive automatically instruments calls to popular LLM providers and vector databases. For example, if you’re using OpenAI and Pinecone, your trace in the platform would look as follows.

If you are unable to see the auto-captured calls, please refer to our troubleshooting docs. You can also add custom spans, as described in the next step, to capture those calls manually.

This is great! Now we know exactly what our LLM and vector DB providers are receiving and responding with. This will help us in debugging API errors and understanding latencies.

However, this trace structure isn’t easy to flip through, and it may be missing key steps.

For example, it’s hard to quickly find the user query and context chunks.

  1. The user query is all the way at the end of the LLM messages.
  2. The context chunks are all concatenated together, so we can’t easily tell them apart.

Next, we’ll introduce a few basic abstractions to capture these key variables and other missing steps in our application more cleanly.

2. Create a custom span around your main application

The @trace decorator in Python and traceFunction in TypeScript let us add custom spans around important functions in the application. These capture each function’s inputs and outputs, as well as its duration and other relevant properties.

We’ll start by placing the first decorator on the main RAG function.
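
A minimal sketch, assuming the decorator is importable as trace from the honeyhive package:

```python
from honeyhive import trace


@trace
def rag_pipeline(query: str) -> str:
    # Same body as the uninstrumented version above: embed the query,
    # retrieve matching chunks from Pinecone, then generate with OpenAI.
    # The decorator captures the query, the returned answer, and the duration.
    ...
```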

By adding this high-level span, we get a more readable trace structure that looks like this:

The rag_pipeline/ragPipeline span is a lot easier to read and interpret.

We can see that the user query was “What does the document talk about?” and that the final output is the model’s description of the document, which may or may not be correct.

This high-level view will help us catch any glaring semantic issues.

However, this is still not sufficient.

We still need access to some specific fields from the vector DB and LLM step that can break down how we arrived at this output.

Luckily, our decorator approach scales easily to cover any step we choose.

3. Create a custom span around key intermediate steps

First, let’s split our large RAG function into different sub-functions.

Any intermediate step whose inputs and outputs we want to track is a good candidate for splitting out into its own function.

You might sometimes need to pass a variable as an argument even if you don’t use it inside the function, so that it’s tracked as an input on the span in the platform.

In this case, we can separate out a retriever step and a generator step to trace individually.

Now, let’s add the trace decorator to the document retrieval and response generation steps.
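
A sketch of the split, reusing the illustrative clients from the earlier example (the top_k value, model names, and metadata key remain assumptions):

```python
from honeyhive import trace


@trace
def get_relevant_documents(query: str) -> list[str]:
    # Embed the query and fetch the top matching chunks from Pinecone.
    embedding = openai_client.embeddings.create(
        model="text-embedding-3-small", input=query
    ).data[0].embedding
    results = index.query(vector=embedding, top_k=3, include_metadata=True)
    return [match.metadata["text"] for match in results.matches]


@trace
def generate_response(query: str, docs: list[str]) -> str:
    # Use the retrieved chunks as context for the LLM call.
    completion = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Answer using the provided context."},
            {"role": "user", "content": f"Context: {docs}\n\nQuery: {query}"},
        ],
    )
    return completion.choices[0].message.content


@trace
def rag_pipeline(query: str) -> str:
    # Each sub-step now appears as its own nested span under rag_pipeline.
    docs = get_relevant_documents(query)
    return generate_response(query, docs)
```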

The decorator will automatically pick up the function name, so we can easily discern which steps are calling our providers. It’ll also track latencies and, as we’ll see later, additional details like configuration and metadata.

By adding these lower-level spans, we get a functional trace structure that looks like this:

Wonderful!

Using the decorator, we can easily click through the documents fetched by get_relevant_documents/getRelevantDocuments and understand whether the LLM’s answer is sensible.

Our UI makes it easy to navigate deeply nested JSON with large text fields, which makes debugging smoother.

Just by inspecting these spans, we can quickly determine whether the retrieval or the generation step is causing our application to fail.

Phase 2 - Enrich the data

For the next phase, let’s add any other external context available to us to the trace.

This will help us later when charting the data and understanding aggregate trends in usage and feedback.

4. Enrich the Custom Spans with Configuration and Metadata

The trace decorator accepts metadata and configuration to provide more context to the traces:
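
For example, here is a sketch annotating the generation step. It assumes the decorator accepts config and metadata keyword arguments; the exact parameter names and the values shown (prompt_version, experiment_id) are illustrative.

```python
from honeyhive import trace


@trace(
    config={"model": "gpt-4o", "temperature": 0.0, "prompt_version": "v2"},  # illustrative
    metadata={"experiment_id": "exp_42"},                                    # illustrative
)
def generate_response(query: str, docs: list[str]) -> str:
    # Same body as before; the config and metadata now appear on this span.
    ...
```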

5. Enrich the trace with Feedback and Metadata

We can enrich the session from anywhere else in the code. For example, we’ll call our RAG pipeline function from a separate main function.

Using the set_feedback/setFeedback and set_metadata/setMetadata helper functions on our base tracer class, we will enrich the full session with the relevant external context as well.
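
A sketch of what that might look like in Python. Whether these helpers live on the HoneyHiveTracer class or on the instance returned by init() may vary by SDK version, and the feedback and metadata fields shown are illustrative.

```python
from honeyhive import HoneyHiveTracer


def main():
    query = "What does the document talk about?"
    response = rag_pipeline(query)
    print(f"Query: {query}\nResponse: {response}")

    # Session-level enrichment, e.g. collected from your product's UI.
    HoneyHiveTracer.set_feedback({"rating": 4, "comment": "Mostly accurate answer"})
    HoneyHiveTracer.set_metadata({"user_id": "user_123", "app_version": "1.2.0"})


if __name__ == "__main__":
    main()
```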

After these enrichments, we can see the user feedback, metadata, and other auto-aggregated properties appear on our session in the side view:

Putting It All Together

Let’s combine all the concepts we’ve covered into a complete example of a RAG application with HoneyHive observability:
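
The following Python sketch assembles the earlier snippets into one file. It is not a copy of the official example repos linked at the top; the model names, index name, metadata fields, and some SDK parameter names are assumptions you may need to adapt.

```python
import os

from honeyhive import HoneyHiveTracer, trace
from openai import OpenAI
from pinecone import Pinecone

# 1. Initialize the tracer once at startup (project/session names are illustrative).
HoneyHiveTracer.init(
    api_key=os.environ["HONEYHIVE_API_KEY"],
    project="rag-demo",
    session_name="rag-tutorial",
)

# 2. Clients are auto-instrumented by HoneyHive once the tracer is initialized.
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("documents")  # illustrative index name


# 3. Traced sub-steps and the orchestrating pipeline.
@trace(metadata={"top_k": 3, "embedding_model": "text-embedding-3-small"})
def get_relevant_documents(query: str) -> list[str]:
    # Embed the query and retrieve the top matching chunks from Pinecone.
    embedding = openai_client.embeddings.create(
        model="text-embedding-3-small", input=query
    ).data[0].embedding
    results = index.query(vector=embedding, top_k=3, include_metadata=True)
    return [match.metadata["text"] for match in results.matches]


@trace(config={"model": "gpt-4o", "temperature": 0.0})
def generate_response(query: str, docs: list[str]) -> str:
    # Use the retrieved chunks as context for the LLM call.
    completion = openai_client.chat.completions.create(
        model="gpt-4o",
        temperature=0.0,
        messages=[
            {"role": "system", "content": "Answer using the provided context."},
            {"role": "user", "content": f"Context: {docs}\n\nQuery: {query}"},
        ],
    )
    return completion.choices[0].message.content


@trace
def rag_pipeline(query: str) -> str:
    # Orchestrates retrieval and generation; appears as the parent span.
    docs = get_relevant_documents(query)
    return generate_response(query, docs)


# 4. Run the pipeline and enrich the session with feedback and metadata.
def main():
    query = "What does the document talk about?"
    response = rag_pipeline(query)
    print(f"Query: {query}\nResponse: {response}")

    # Simulated user feedback and session-level metadata.
    HoneyHiveTracer.set_feedback({"rating": 4, "comment": "Mostly accurate"})
    HoneyHiveTracer.set_metadata({"user_id": "user_123", "app_version": "1.2.0"})


if __name__ == "__main__":
    main()
```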

In this example:

  1. We set up the necessary environment variables and initialize the HoneyHive Tracer.
  2. We create clients for OpenAI and Pinecone, which will be automatically instrumented by HoneyHive.
  3. We split our main application function into three smaller traced functions:
    • get_relevant_documents/getRelevantDocuments: Retrieves relevant documents from Pinecone.
    • generate_response/generateResponse: Generates a response using OpenAI’s GPT model.
    • rag_pipeline/ragPipeline: Orchestrates the entire RAG process.
  4. In the main function, we:
    • Run the RAG pipeline with a sample query.
    • Print the query and response.
    • Simulate collecting user feedback and log it to HoneyHive.
  5. Throughout the code, we add metadata and custom spans to provide rich context for our traces.

This example demonstrates how HoneyHive provides comprehensive observability for your LLM application, allowing you to track and analyze every step of your RAG pipeline.

Best Practices

  1. Use descriptive function names: This makes it easier to understand the structure of your application in the traces.
  2. Add relevant metadata: Include information that will help you filter and analyze traces later, such as user IDs, experiment IDs, or version numbers.
  3. Collect user feedback: This provides valuable insights into the real-world performance of your application.
  4. Use nested spans: Structure your traces to reflect the hierarchy of your application’s components.

Conclusion

By following this tutorial, you’ve added comprehensive observability to your LLM application using HoneyHive. This will help you iterate quickly, identify issues, and improve the performance of your application throughout its lifecycle.

For more advanced features and in-depth guides, check out the following resources:

Next Steps

The next phase after capturing the right data from your application is setting up online evaluators and collecting datasets to measure quality in production.

The following guides will help you configure different types of evaluators for any step in your application.