Initializing HoneyHive Tracer

Use the following code to initialize HoneyHive tracing in your project.

For Python projects, initialize tracing with the HoneyHiveTracer class:

from honeyhive import HoneyHiveTracer
import os

HoneyHiveTracer.init(api_key=os.environ["HH_API_KEY"], project=os.environ["HH_PROJECT"])

This initializes auto-tracing for your entire Python application.

If you use these code examples verbatim, make sure to set the appropriate environment variables (HH_API_KEY, HH_PROJECT, and, for the JavaScript example, HH_SESSION_NAME) before running your application.
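As a quick guard, you can check that the required variables are present before initializing the tracer. This is a minimal sketch (not part of the HoneyHive SDK); the variable names match the examples in this guide:

```python
import os

def check_required_env(required=("HH_API_KEY", "HH_PROJECT")):
    """Return the names of any required environment variables that are missing."""
    return [name for name in required if not os.environ.get(name)]

missing = check_required_env()
if missing:
    print(f"Set these environment variables first: {', '.join(missing)}")
```

Running this before `HoneyHiveTracer.init` gives a clearer error than a `KeyError` from `os.environ[...]`.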

Supported LangChain Versions/Interfaces

  • Python: Compatible with LangChain versions ^0.2.0.
  • JavaScript: Compatible with LangChain versions ^0.2.0.

For the most up-to-date compatibility information, please refer to the HoneyHive documentation.

Nesting

Nesting is handled automatically by the HoneyHive tracing system. When you use traced components within other traced components, the system will create a hierarchical structure of spans, reflecting the nested nature of your LangChain operations.

Enriching Properties

For information on how to enrich your traces and spans with additional context, see our enrichment documentation.

Adding Evaluators

Once traces have been logged in the HoneyHive platform, you can then run evaluations with either Python or TypeScript.

Cookbook Examples

Python Example

import os
from langchain_community.document_loaders import TextLoader  # requires langchain-community
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS  # requires faiss-cpu
from langchain_openai import OpenAIEmbeddings, OpenAI  # requires langchain-openai
from langchain.chains import RetrievalQA
from honeyhive import HoneyHiveTracer

HoneyHiveTracer.init(api_key=os.environ["HH_API_KEY"], project=os.environ["HH_PROJECT"])

# Load the document
loader = TextLoader('state_of_the_union.txt')
documents = loader.load()

# Split the document into chunks
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
docs = text_splitter.split_documents(documents)

# Create embeddings
embeddings = OpenAIEmbeddings()

# Create a FAISS vector store from the documents
vectorstore = FAISS.from_documents(docs, embeddings)

# Create a retriever interface
retriever = vectorstore.as_retriever()

# Initialize the OpenAI LLM
llm = OpenAI(temperature=0)

# Create a RetrievalQA chain
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever
)

# Ask a question
query = "What did the president say about Ketanji Brown Jackson?"
result = qa_chain.invoke({"query": query})

print(result["result"])
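To run the example above, install the relevant packages and export your keys first. The package names below are assumptions based on the imports, and `qa_example.py` is a hypothetical filename for the script; adjust both to your environment:

```shell
# Assumed package set for the example above; adjust as needed.
pip install honeyhive langchain langchain-community langchain-openai langchain-text-splitters faiss-cpu

export HH_API_KEY="your-honeyhive-api-key"
export HH_PROJECT="your-project-name"
export OPENAI_API_KEY="your-openai-api-key"

python qa_example.py  # the script containing the example
```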

JavaScript Example

import { OpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { RetrievalQAChain } from "langchain/chains";
import { HoneyHiveLangChainTracer } from "honeyhive";

async function runQA() {
  const tracer = new HoneyHiveLangChainTracer({
    project: process.env.HH_PROJECT,
    sessionName: process.env.HH_SESSION_NAME,
    apiKey: process.env.HH_API_KEY,
  });

  // Load the document (loaders and splitters don't accept callbacks;
  // tracing is attached to the runnable components below)
  const loader = new TextLoader('state_of_the_union.txt');
  const documents = await loader.load();

  // Split the document into chunks
  const textSplitter = new RecursiveCharacterTextSplitter({
    chunkSize: 1000,
    chunkOverlap: 200,
  });
  const docs = await textSplitter.splitDocuments(documents);

  // Create embeddings
  const embeddings = new OpenAIEmbeddings();

  // Create a FAISS vector store from the documents
  const vectorStore = await FaissStore.fromDocuments(docs, embeddings);

  // Create a retriever interface with tracing
  const retriever = vectorStore.asRetriever({
    callbacks: [tracer],
  });

  // Initialize the OpenAI LLM with tracing
  const llm = new OpenAI({
    temperature: 0,
    callbacks: [tracer],
  });

  // Create a RetrievalQA chain with tracing
  const qaChain = RetrievalQAChain.fromLLM(llm, retriever, {
    callbacks: [tracer],
  });

  // Ask a question
  const query = "What did the president say about Ketanji Brown Jackson?";
  const res = await qaChain.call({ query, callbacks: [tracer] });

  console.log(res.text);
}

runQA();

These examples demonstrate how to integrate HoneyHive tracing with LangChain in both Python and JavaScript environments, covering document loading, text splitting, embedding creation, vector store operations, and question-answering chains.