
Initializing HoneyHive Tracer

HoneyHive’s Python SDK uses the OpenInference LangChain instrumentor to trace all agents, chains, tools, and LLM calls automatically. The same package also traces LangGraph.

First, install the LangChain integration extra:

pip install "honeyhive[openinference-langchain]>=1.0.0rc0" langchain-openai

Then use the following code to initialize HoneyHive tracing in your project:
import os
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.langchain import LangChainInstrumentor

tracer = HoneyHiveTracer.init(
    api_key=os.environ["HH_API_KEY"],
    project=os.environ["HH_PROJECT"],
)
LangChainInstrumentor().instrument(tracer_provider=tracer.provider)

# Your existing LangChain code is now traced automatically.
See the modern LangChain integration guide for full examples, tested versions, and the Traceloop (OpenLLMetry) alternative.
If you use these code examples verbatim, make sure to set the appropriate environment variables (HH_API_KEY, HH_PROJECT, and, for TypeScript, HH_SESSION_NAME) before running your application.
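For example, in a POSIX shell you can export them before launching your application (the values below are placeholders; substitute your own credentials and project name):

```shell
# Placeholder values — replace with your own credentials and project name.
export HH_API_KEY="your-api-key"
export HH_PROJECT="your-project-name"
export HH_SESSION_NAME="my-session"   # used by the TypeScript example only
```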

Supported LangChain Versions

  • Python: LangChain >= 1.0.0 (last known good version: 1.2.15), openinference-instrumentation-langchain 0.1.62. Requires Python 3.11+.
  • JavaScript/TypeScript: LangChain ^0.2.0 or newer.
For the latest tested versions and compatibility details, see the modern LangChain integration guide.
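If you want to fail fast when the interpreter is too old, a small startup guard can check the Python 3.11+ requirement. The helper below is illustrative, not part of the HoneyHive SDK:

```python
import sys

def meets_python_requirement(minimum=(3, 11)):
    """Return True if the running interpreter satisfies the minimum version."""
    return sys.version_info[:2] >= minimum

# Example: warn early with a clear message instead of hitting a late import error.
if not meets_python_requirement():
    print("Warning: HoneyHive's LangChain integration requires Python 3.11+")
```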

Nesting

Nesting is handled automatically by the HoneyHive tracing system. When you use traced components within other traced components, the system creates a hierarchical structure of spans that reflects the nested nature of your LangChain operations.
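Conceptually, each traced component opens a span that records its parent, so nested calls yield a tree. The sketch below is a plain-Python illustration of that parent/child structure, not HoneyHive's internal implementation:

```python
import contextlib

# Illustrative only — a toy span tree, not HoneyHive's actual tracer.
root_spans = []

@contextlib.contextmanager
def span(name, parent=None):
    """Open a span; child spans register themselves under their parent."""
    record = {"name": name, "children": []}
    if parent is None:
        root_spans.append(record)
    else:
        parent["children"].append(record)
    yield record

# An agent span containing an LLM call and a tool call, mirroring how
# nested LangChain operations appear as nested spans in a HoneyHive trace.
with span("agent") as agent:
    with span("llm_call", parent=agent):
        pass
    with span("tool_call", parent=agent):
        pass
```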

Enriching Properties

For information on how to enrich your traces and spans with additional context, see our enrichment documentation.

Adding Evaluators

Once traces have been logged to the HoneyHive platform, you can run evaluations over them using either the Python or TypeScript SDK.

Cookbook Examples

Python Example

import os
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.langchain import LangChainInstrumentor
from langchain.agents import create_agent
from langchain_core.tools import tool

tracer = HoneyHiveTracer.init(
    api_key=os.environ["HH_API_KEY"],
    project=os.environ["HH_PROJECT"],
)
LangChainInstrumentor().instrument(tracer_provider=tracer.provider)

@tool
def calculator(expression: str) -> str:
    """Evaluate a basic arithmetic expression."""
    # Builtins are stripped to restrict eval for this demo;
    # avoid eval on untrusted input in production code.
    return str(eval(expression, {"__builtins__": {}}, {}))

@tool
def policy_lookup(topic: str) -> str:
    """Look up company policy on a topic."""
    policies = {
        "soc2": "SOC 2 covers security, availability, processing integrity, confidentiality, and privacy.",
        "retention": "Default retention is 30 days unless compliance requires longer.",
    }
    return policies.get(topic.lower(), "No policy found.")

agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=[calculator, policy_lookup],
    system_prompt="You are a support assistant. Use tools when needed.",
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "What is 17 * 3 + 5? Also summarize our SOC2 policy."}]}
)
print(result["messages"][-1].content)

TypeScript Example

import * as fs from 'fs';
import { OpenAI } from "@langchain/openai";
import { TextLoader } from 'langchain/document_loaders/fs/text';
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';
import { OpenAIEmbeddings } from "@langchain/openai";
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { RetrievalQAChain } from 'langchain/chains';
import { HoneyHiveLangChainTracer } from 'honeyhive';

async function runQA(): Promise<void> {
  const tracer = new HoneyHiveLangChainTracer({
    project: process.env.HH_PROJECT,
    sessionName: process.env.HH_SESSION_NAME,
    apiKey: process.env.HH_API_KEY,
  });

  const tracerConfig = {
    callbacks: [tracer],
  };

  // Load the document with tracing
  const loader = new TextLoader('state_of_the_union.txt', tracerConfig);
  const documents = await loader.load();

  // Split the document into chunks with tracing
  const textSplitter = new RecursiveCharacterTextSplitter({
    chunkSize: 1000,
    chunkOverlap: 200,
    ...tracerConfig,
  });
  const docs = await textSplitter.splitDocuments(documents);

  // Create embeddings with tracing
  const embeddings = new OpenAIEmbeddings(tracerConfig);

  // Create a FAISS vector store from the documents with tracing
  const vectorStore = await FaissStore.fromDocuments(docs, embeddings, tracerConfig);

  // Create a retriever interface with tracing
  const retriever = vectorStore.asRetriever(tracerConfig);

  // Initialize the OpenAI LLM with tracing
  const llm = new OpenAI({
    temperature: 0,
    ...tracerConfig,
  });

  // Create a RetrievalQA chain with tracing
  const qaChain = RetrievalQAChain.fromLLM(llm, retriever, tracerConfig);

  // Ask a question
  const query = "What did the president say about Ketanji Brown Jackson?";
  const res = await qaChain.call({ query, ...tracerConfig });

  console.log(res.text);
}

runQA().catch(console.error);
These examples demonstrate how to integrate HoneyHive tracing with LangChain in both Python and TypeScript environments, covering document loading, text splitting, embedding creation, vector store operations, and question-answering chains.