Initializing HoneyHive Tracer

Use the following code to initialize HoneyHive tracing in your project.

For Python projects, initialize tracing with the HoneyHiveTracer class:

from honeyhive import HoneyHiveTracer
import os

HoneyHiveTracer.init(api_key=os.environ["HH_API_KEY"], project=os.environ["HH_PROJECT"])

This initializes auto-tracing for your entire Python application.

If you’re using these code examples verbatim, make sure to set the corresponding environment variables (HH_API_KEY and HH_PROJECT) before running your application.
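The tracer also accepts optional parameters for tagging and naming sessions. Below is a minimal sketch, assuming the optional source and session_name parameters from the HoneyHive SDK (check the SDK reference for the exact set of supported options):

from honeyhive import HoneyHiveTracer
import os

HoneyHiveTracer.init(
    api_key=os.environ["HH_API_KEY"],
    project=os.environ["HH_PROJECT"],
    source="dev",  # assumed: tags the environment (e.g. "dev", "staging", "prod")
    session_name="llamaindex-quickstart",  # assumed: labels this session in the UI
)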

Supported LlamaIndex Versions/Interfaces

Compatible with LlamaIndex versions 0.10.0 and above.

For the most up-to-date compatibility information, please refer to the HoneyHive documentation.

Nesting

Nesting is handled automatically by the HoneyHive tracing system. When you use traced components within other traced components, the system will create a hierarchical structure of spans, reflecting the nested nature of your LlamaIndex operations.
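For example, here is a minimal sketch of nested spans, assuming the trace decorator exported by the honeyhive package (the two helper functions are hypothetical stand-ins for real LlamaIndex calls):

from honeyhive import HoneyHiveTracer, trace
import os

HoneyHiveTracer.init(api_key=os.environ["HH_API_KEY"], project=os.environ["HH_PROJECT"])

@trace
def retrieve_context(question: str) -> str:
    # Called from inside answer_question, so this span is recorded as a child span
    return "relevant passages for: " + question

@trace
def answer_question(question: str) -> str:
    # Parent span; the call below produces a nested child span automatically
    context = retrieve_context(question)
    return f"answer grounded in: {context}"

answer_question("What did the president say about Ketanji Brown Jackson?")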

Enriching Properties

For information on how to enrich your traces and spans with additional context, see our enrichment documentation.
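As a rough illustration, assuming the enrich_session and enrich_span helpers exported by the honeyhive package (see the enrichment documentation for the authoritative API):

from honeyhive import HoneyHiveTracer, enrich_session, enrich_span, trace
import os

HoneyHiveTracer.init(api_key=os.environ["HH_API_KEY"], project=os.environ["HH_PROJECT"])

# Attach metadata to the session as a whole
enrich_session(metadata={"user_id": "user-123", "app_version": "1.4.0"})

@trace
def retrieve(question: str) -> str:
    # Attach metadata to the currently active span
    enrich_span(metadata={"retriever": "vector", "top_k": 2})
    return "retrieved context"

retrieve("What did the president say about Ketanji Brown Jackson?")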

Adding Evaluators

Once traces have been logged to the HoneyHive platform, you can run evaluations in either Python or TypeScript.
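As a rough Python sketch only, assuming the evaluate entry point and the evaluator call signature shown below (both are assumptions here; consult the evaluations documentation for the exact API):

from honeyhive import evaluate  # assumed experiments entry point

# Hypothetical application function: receives a dict of inputs for one datapoint
def answer(inputs, ground_truths=None):
    return {"answer": f"stub answer to: {inputs['question']}"}

# Hypothetical evaluator: scores one output against its ground truth
def exact_match(outputs, inputs, ground_truths):
    return 1.0 if outputs["answer"] == ground_truths["answer"] else 0.0

evaluate(
    function=answer,
    dataset=[
        {
            "inputs": {"question": "Who is Ketanji Brown Jackson?"},
            "ground_truths": {"answer": "A U.S. Supreme Court justice"},
        }
    ],
    evaluators=[exact_match],
)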

Cookbook Examples

Python Example

import os
from llama_index.core import (
    VectorStoreIndex,
    SimpleDirectoryReader,
    Settings,
)
from llama_index.core.retrievers import VectorIndexRetriever
from llama_index.core.query_engine import RetrieverQueryEngine
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI
from honeyhive import HoneyHiveTracer

# Initialize HoneyHiveTracer
HoneyHiveTracer.init(api_key=os.environ["HH_API_KEY"], project=os.environ["HH_PROJECT"])

# Load the document
documents = SimpleDirectoryReader(input_files=['state_of_the_union.txt']).load_data()

# Initialize the OpenAI LLM using LlamaIndex's OpenAI wrapper
llm = OpenAI(temperature=0)

# Create the embedding model
embedding_model = OpenAIEmbedding()

# Register the LLM and embedding model with the global Settings object
Settings.llm = llm
Settings.embed_model = embedding_model

# Create a vector index from the documents
index = VectorStoreIndex.from_documents(documents)

# Ask a question
query = "What did the president say about Ketanji Brown Jackson?"
retriever = VectorIndexRetriever(index=index)
query_engine = RetrieverQueryEngine.from_args(retriever)
response = query_engine.query(query)

print(response)

This example demonstrates how to integrate HoneyHive tracing with LlamaIndex in Python, covering document loading, embedding configuration, vector index construction, and retrieval-based question answering.