Observability is crucial for LLM applications due to their non-deterministic nature. HoneyHive provides LLM-native observability, giving you meaningful insights into your application at every stage of development, from prototyping to production. In this tutorial, we'll walk through adding observability to a simple RAG (Retrieval-Augmented Generation) application using HoneyHive.
Feel free to copy-paste this tutorial as a prompt in Cursor or GitHub Copilot for auto-instrumenting your code.
While this application works, it lacks observability. We can’t easily track performance, debug issues, or gather insights about its behavior. Let’s add HoneyHive observability to address these limitations.
At the beginning of your application, initialize the HoneyHive tracer:
```python
# below your other imports,
# add an import for the auto-tracer
from honeyhive import HoneyHiveTracer

# Initialize the tracer
HoneyHiveTracer.init(
    api_key="your-honeyhive-api-key",
    project="your-honeyhive-project-name",
    source="development",
    session_name="RAG Session"
)

# The rest of the code remains the same as the sample application
```
HoneyHive automatically instruments calls to popular LLM providers and vector databases. For example, if you’re using OpenAI and Pinecone, your trace in the platform would look as follows.
If you are unable to see the auto-captured calls, please refer to our troubleshooting docs. Alternatively, you can add custom spans, as described in the next step, to capture those calls.
This is great! Now we know exactly what our LLM and vector DB providers are receiving and responding with, which will help us debug API errors and understand latencies.

However, such a trace structure is not easy to flip through, and it might even be missing key steps. For example, it's hard to quickly find the user query and context chunks:
The user query is all the way at the end of the LLM messages.
The context chunks are all mixed together, so we can't tease them apart.
Next, we’ll introduce a few basic abstractions to capture these key variables and other missing steps in our application more cleanly.
2. Create a custom span around your main application
The @trace decorator in Python and traceFunction in TypeScript let us add custom spans around important functions in the application. They capture all function inputs and outputs, as well as durations and other relevant properties. We'll start by placing the first decorator on the main RAG function.
```python
# in the imports, add an import for `trace` as follows
from honeyhive import trace

# add a decorator on your main application function
@trace
def rag_pipeline(query):
    ...  # no changes inside

# logic elsewhere remains the same
```
By adding this high-level span, we get a more readable trace structure that looks like:
The rag_pipeline/ragPipeline span is much easier to read and interpret. We can see that the user query was "What does the document talk about?" and that the final output is the (possibly?) correct description provided by the model. This high-level view will help us catch any glaring semantic issues.

However, this is still not sufficient. We still need access to specific fields from the vector DB and LLM steps that break down how we arrived at this output. Luckily, our decorator approach easily scales to include any step we please.
3. Create a custom span around key intermediate steps
First, let's split our large RAG function into sub-functions. Any intermediate step whose inputs and outputs we want to track is a good candidate for splitting out into its own function.
You may sometimes have to pass a variable as an argument even if you don't end up using it inside the function, so that it gets tracked as an input on the span in the platform.
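To see why this matters, here is a toy sketch of the idea. Note that `toy_trace` is *not* HoneyHive's decorator, just a minimal stand-in showing how a tracer records a function's arguments as span inputs; the `rerank_documents` function and its parameters are hypothetical.

```python
import functools

# Toy stand-in for a tracing decorator: it records every argument
# passed to the function, the way a tracer records span inputs.
captured_inputs = {}

def toy_trace(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Anything passed as an argument becomes a recorded span input.
        captured_inputs[func.__name__] = dict(kwargs)
        return func(*args, **kwargs)
    return wrapper

@toy_trace
def rerank_documents(docs, query=None):
    # `query` is unused in the body, but passing it anyway means the
    # tracer still records it as an input on this span.
    return sorted(docs, key=len)

rerank_documents(docs=["a longer document chunk", "short chunk"],
                 query="What does the document talk about?")
print(captured_inputs["rerank_documents"]["query"])
```

If `query` were only available via a closure or a global, it would never show up in `captured_inputs`; passing it explicitly is what makes it visible on the span.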
In this case, we can separate a retriever and generator step to trace separately.
```python
# logic above remains the same

def embed_query(query):
    res = openai_client.embeddings.create(
        model="text-embedding-ada-002",
        input=query
    )
    query_vector = res.data[0].embedding
    return query_vector

def get_relevant_documents(query):
    query_vector = embed_query(query)
    results = index.query(vector=query_vector, top_k=3)
    return [result.metadata["text"] for result in results.matches]

def generate_response(context, query):
    prompt = f"Context: {context}\n\nQuestion: {query}\n\nAnswer:"
    response = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt}
        ]
    )
    return response.choices[0].message.content

@trace
def rag_pipeline():
    query = "What does this document talk about?"
    docs = get_relevant_documents(query)
    context = "\n".join(docs)
    response = generate_response(context, query)
    print(f"Query: {query}")
    print(f"Response: {response}")

# logic below remains the same
```
Now, let's add the function decorator to the document retrieval and response generation steps. The decorator automatically picks up the function name, so we can easily discern which steps are calling our providers. It also tracks latencies and, as we'll see later, additional details like configuration and metadata.
```python
# logic above remains the same

# add a decorator on the key intermediate functions
@trace
def get_relevant_documents(query):
    query_vector = embed_query(query)
    results = index.query(vector=query_vector, top_k=3)
    return [result.metadata["text"] for result in results.matches]

# add a decorator on the key intermediate functions
@trace
def generate_response(context, query):
    prompt = f"Context: {context}\n\nQuestion: {query}\n\nAnswer:"
    response = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt}
        ]
    )
    return response.choices[0].message.content

# logic below remains the same
```
By adding the lower level spans, we get a functional trace structure that looks like:
Wonderful! Using the decorator, we can easily click through the documents fetched by get_relevant_documents and understand whether the LLM's answer is sensible. Our UI makes it easy to navigate deeply nested JSON with large text fields, making debugging smoother. Just by investigating these spans, we can quickly determine whether the retriever or the generation step is causing our overall application to fail.
For the next phase, let's add any other external context that's available to us to the trace. This will help us later when charting the data and understanding aggregate trends in usage and feedback.
We can enrich the session from anywhere else in the code. For example, we'll call our RAG pipeline function from a separate main function. Using the enrich_session/enrichSession helper functions on our base tracer class, we can enrich the full session with the relevant external context.
```python
# logic above remains the same

def main():
    query = "What is the capital of France?"
    response = rag_pipeline(query)
    print(f"Query: {query}")
    print(f"Response: {response}")

    # Setting metadata on the session
    # Simulate getting user feedback
    user_rating = 4
    HoneyHiveTracer.enrich_session(
        feedback={
            "rating": user_rating,
            "comment": "The response was accurate and helpful."
        },
        metadata={
            "experiment-id": 123
        }
    )

if __name__ == "__main__":
    main()
```
After the above enrichments, we can see the user feedback, metadata, and our other auto-aggregated properties appear on our session in the sideview:
Let’s combine all the concepts we’ve covered into a complete example of a RAG application with HoneyHive observability:
```python
import os
from openai import OpenAI
from pinecone import Pinecone
from honeyhive import HoneyHiveTracer, trace

# Set up environment variables
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
os.environ["PINECONE_API_KEY"] = "your-pinecone-api-key"

# Initialize HoneyHive Tracer
HoneyHiveTracer.init(
    api_key="your-honeyhive-api-key",
    project="your-honeyhive-project-name",
    source="dev",
    session_name="RAG Session"
)

# Initialize clients
openai_client = OpenAI()
pc = Pinecone()
index = pc.Index("your-index-name")

def embed_query(query):
    res = openai_client.embeddings.create(
        model="text-embedding-ada-002",
        input=query
    )
    query_vector = res.data[0].embedding
    return query_vector

# Decorate the intermediate steps
@trace(
    config={
        "embedding_model": "text-embedding-ada-002",
        "top_k": 3
    }
)
def get_relevant_documents(query):
    query_vector = embed_query(query)
    res = index.query(vector=query_vector, top_k=3, include_metadata=True)
    return [item["metadata"]["_node_content"] for item in res["matches"]]

# Decorate the intermediate steps
@trace(
    config={
        "model": "gpt-4o",
        "prompt": "You are a helpful assistant"
    },
    metadata={
        "version": 1
    }
)
def generate_response(context, query):
    prompt = f"Context: {context}\n\nQuestion: {query}\n\nAnswer:"
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt}
        ]
    )
    return response.choices[0].message.content

# Decorate the main application logic
@trace
def rag_pipeline(query):
    docs = get_relevant_documents(query)
    response = generate_response("\n".join(docs), query)
    return response

def main():
    query = "What does the document talk about?"
    response = rag_pipeline(query)
    print(f"Query: {query}")
    print(f"Response: {response}")

    # Set relevant metadata on the session level
    # Simulate getting user feedback
    user_rating = 4
    HoneyHiveTracer.enrich_session(
        feedback={
            "rating": user_rating,
            "comment": "The response was accurate and helpful."
        },
        metadata={
            "experiment-id": 123
        }
    )

if __name__ == "__main__":
    main()
```
In this example:
We set up the necessary environment variables and initialize the HoneyHive Tracer.
We create clients for OpenAI and Pinecone, which will be automatically instrumented by HoneyHive.
We split our main application function into three smaller traced functions:
get_relevant_documents/getRelevantDocuments: Retrieves relevant documents from Pinecone.
generate_response/generateResponse: Generates a response using OpenAI’s GPT model.
rag_pipeline/ragPipeline: Orchestrates the entire RAG process.
In the main function, we:
Run the RAG pipeline with a sample query.
Print the query and response.
Simulate collecting user feedback and log it to HoneyHive.
Throughout the code, we add metadata and custom spans to provide rich context for our traces.
This example demonstrates how HoneyHive provides comprehensive observability for your LLM application, allowing you to track and analyze every step of your RAG pipeline.
By following this tutorial, you've added comprehensive observability to your LLM application using HoneyHive. This will help you iterate quickly, identify issues, and improve your application's performance throughout its lifecycle.

For more advanced features and in-depth guides, check out the following resources:
The next phase after capturing the right data from your application is setting up online evaluators and collecting datasets to measure quality in production. The following guides will help you configure different types of evaluators for any step in your application.
Setup an online Python evaluator
Learn how to add a Python evaluator for specific steps or the whole application’s trace.
Setup an online LLM evaluator
Learn how to add an LLM evaluator for specific steps or the whole application's trace.
Setup human annotation
Configure human annotation for specific steps or the whole application’s trace.
Curate a dataset from traces
Learn how to curate a dataset of inputs and outputs from your traces.