In the following example, we are going to walk through how to log your LangChain runs to HoneyHive for benchmarking and sharing. For a complete overview of LangChain tracing in HoneyHive, you can refer to our LangChain Tracing guide.

Get API key

After signing up on the app, you can find your API key in the Settings page under Account.
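Once you have the key, avoid hard-coding it in your scripts. A minimal sketch of reading it from the environment (the `HONEYHIVE_API_KEY` variable name here is an assumption for illustration, not a documented requirement):

```python
import os

# Read the API key from the environment rather than hard-coding it.
# The variable name HONEYHIVE_API_KEY is an assumed convention for this sketch.
HONEYHIVE_API_KEY = os.environ.get("HONEYHIVE_API_KEY", "<your-api-key>")
```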

Install the SDK

We currently support a native Python SDK. For other languages, we encourage using HTTP request libraries to send requests.

pip install honeyhive -q

Trace your LangChain chains and agents

If you haven’t already, the first thing you will need to do is create a HoneyHive project.

Once your project is created, you can start tracing your LangChain chain or agent.

  1. Initialize the HoneyHive tracer: First, let’s start by initializing the HoneyHive tracer. See below.
import os
from honeyhive.sdk.langchain_tracer import HoneyHiveLangChainTracer


honeyhive_tracer = HoneyHiveLangChainTracer(
    project="AI Search Chatbot",     # required field: specify which project within HoneyHive
    name="SERP Q&A",                 # optional field: name of the chain/agent you are running
    source="staging",                # optional field: source (to separate production & staging environments)
    user_properties={                # optional field: specify properties of the user this was run for
        "user_id": "sd8298bxjn0s",
        "user_account": "Acme",
        "user_country": "United States",
        "user_subscriptiontier": "enterprise",
    },
)
  2. Define the LangChain chain or agent: Next, let’s initialize the OpenAI LLM and define the tools for our LangChain agent. See below.
from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, Wikipedia
from langchain.agents import Tool, initialize_agent
from langchain.agents.react.base import DocstoreExplorer

# Initialise the OpenAI LLM and required callables for our tools
llm = OpenAI(
    temperature=0, openai_api_key=OPENAI_API_KEY
)
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain.from_llm(llm=llm)
docstore = DocstoreExplorer(Wikipedia())

# Define the tools to be fed to the agent
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="Useful for when you need to answer questions about current events. You should ask targeted questions.",
    ),
    Tool(
        name="Lookup",
        func=docstore.search,
        description="Useful for when you need factual information. Ask search terms for Wikipedia",
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="Useful for when you need to answer questions about math.",
    ),
]
  3. Run the LangChain agent with the HoneyHive callback handler: Lastly, let’s run the LangChain agent. Here, you will need to pass honeyhive_tracer as the callback handler.
# Initialise the agent and run it with the HoneyHive callback handler
agent = initialize_agent(tools=tools, llm=llm)
agent.run(
    "Which city is closest to London as the crow flies, Berlin or Munich?",
    callbacks=[honeyhive_tracer],
)

You can now view this trace within the HoneyHive platform by clicking Datasets in the sidebar and then Traces.

Log user feedback for this session

Now that you’ve logged a request in HoneyHive, let’s try logging user feedback and ground truth labels associated with this session.

Using the session_id that is returned, you can send arbitrary feedback to HoneyHive using the feedback endpoint.

import honeyhive

honeyhive.feedback(
    project="AI Search Chatbot",
    session_id=honeyhive_tracer.session_id,
    feedback={
        "accepted": True,
        "saved": True,
        "regenerated": False,
        "edited": False,
    },
)
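Because the feedback payload is an arbitrary dictionary, ground truth labels can ride along in the same structure. A sketch of what that payload might look like (the `ground_truth` key is an assumed name for illustration, not a reserved HoneyHive field):

```python
# Sketch: attach a human-assigned ground-truth label alongside the
# boolean feedback signals. "ground_truth" is an assumed key name.
feedback_payload = {
    "accepted": True,
    "ground_truth": "Berlin",  # label a reviewer assigned to this session
}
```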