In the following example, we are going to walk through how to log your LlamaIndex runs to HoneyHive for benchmarking and sharing. For a complete overview of LlamaIndex tracing in HoneyHive, you can refer to our LlamaIndex Tracing guide.

Get API key

After signing up on the app, you can find your API key in the Settings page under Account.

Install the SDK

We currently support a native Python SDK. For other languages, we encourage using HTTP request libraries to send requests.

pip install honeyhive -q
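With the SDK installed, you'll also need your API key available at runtime. Below is a minimal sketch of reading it from an environment variable so it isn't hard-coded — note that the helper function and the `HONEYHIVE_API_KEY` variable name are illustrative, not part of the SDK:

```python
import os

# Hypothetical helper (not part of the HoneyHive SDK): resolve the API key
# from the environment. The HONEYHIVE_API_KEY name here is an assumption;
# use whichever variable name fits your deployment.
def get_honeyhive_api_key() -> str:
    key = os.environ.get("HONEYHIVE_API_KEY")
    if not key:
        raise RuntimeError(
            "HONEYHIVE_API_KEY is not set; copy your key from Settings > Account"
        )
    return key
```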

Trace your LlamaIndex queries

If you haven’t already done so, the first thing you’ll need to do is create a HoneyHive project.

Once you have created a HoneyHive project, you can start tracing your LlamaIndex pipeline.

  1. Initializing the HoneyHive tracer: First, let’s start by initializing the HoneyHive tracer. See below.
import honeyhive
import os
from honeyhive.utils.llamaindex_tracer import HoneyHiveLlamaIndexTracer

tracer = HoneyHiveLlamaIndexTracer(
    project="PG Q&A Bot",  # required field: the HoneyHive project to log to
    name="Paul Graham Q&A",  # optional field: name of the chain/agent you are running
    source="staging",  # optional field: source (to separate production & staging environments)
    user_properties={  # optional field: properties of the user this was run for
        "user_id": "sd8298bxjn0s",
        "user_account": "Acme",
        "user_country": "United States",
        "user_subscriptiontier": "enterprise",
    },
)
  2. Defining the LlamaIndex pipeline: Next, let’s define our LlamaIndex pipeline and initialize the service context with the HoneyHive tracer. See below.
from llama_index import VectorStoreIndex, SimpleWebPageReader, ServiceContext
from llama_index.callbacks import CallbackManager, LlamaDebugHandler
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"

# Initialize the service context with the HoneyHive tracer
callback_manager = CallbackManager([tracer])
service_context = ServiceContext.from_defaults(callback_manager=callback_manager)

documents = SimpleWebPageReader(html_to_text=True).load_data(
    ["http://paulgraham.com/worked.html"]  # example page; swap in your own source
)

# Pass the service_context to the index that you will query
index = VectorStoreIndex.from_documents(
    documents, service_context=service_context
)
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")

You can now view this trace within the HoneyHive platform by clicking on Datasets in the sidebar and then Traces.

Log user feedback for this session

Now that you’ve logged a request in HoneyHive, let’s try logging user feedback and ground truth labels associated with this session.

Using the session_id that is returned, you can send arbitrary feedback to HoneyHive using the feedback endpoint.

# The exact feedback helper can vary across SDK versions; check the
# HoneyHive feedback endpoint reference if this call does not match yours.
honeyhive.sessions.feedback(
    session_id=tracer.session_id,
    feedback={
        "accepted": True,
        "saved": True,
        "regenerated": False,
        "edited": False,
    },
)
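If you are not using the Python SDK, the same feedback can be posted over HTTP, as noted earlier. Below is a minimal sketch of building the JSON body — the top-level field names mirror the SDK call above, but confirm the endpoint path, auth header, and schema against the HoneyHive API reference before relying on them:

```python
import json

# Sketch: build the JSON body for a feedback request. The "session_id" and
# "feedback" field names mirror the SDK example; verify them against the
# HoneyHive API reference for your SDK/API version.
def build_feedback_body(session_id: str, feedback: dict) -> str:
    return json.dumps({"session_id": session_id, "feedback": feedback})
```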