
Documentation Index

Fetch the complete documentation index at: https://docs.honeyhive.ai/llms.txt

Use this file to discover all available pages before exploring further.

HoneyHive replaces Langfuse with a purpose-built observability platform for production AI. This guide covers both live instrumentation migration (updating your code) and the data field mapping for migrating historical Langfuse traces.

Why teams switch to HoneyHive

Built for agentic AI

Multi-agent workflows, tool orchestration, and reasoning traces: HoneyHive handles the complexity that Langfuse was not designed for. Get per-span latency, per-agent costs, and full reasoning paths out of the box.

Zero-dependency SDK (BYOI)

Bring Your Own Instrumentor. Use any version of openai, anthropic, or langchain with no SDK conflicts. HoneyHive never patches your clients.

Enterprise-grade security

SOC 2, SSO, and RBAC built in. Dedicated cloud and self-hosted deployment options for regulated industries.

Production monitoring

Real-time alerts, anomaly detection, and custom dashboards purpose-built for AI workloads, not retrofitted from generic APM.

| Capability | Langfuse | HoneyHive |
|---|---|---|
| Agentic trace depth | Basic trace trees | Per-span latency, per-agent cost, reasoning paths |
| SDK dependency model | Bundled deps, client patching | Zero dependencies (BYOI) |
| Security and compliance | Community-focused | SOC 2, SSO, RBAC |
| Production monitoring | Basic dashboards | Real-time alerts, anomaly detection |
| Evaluations | Client-side scoring only | Server-side evaluators (auto-run on all traces) + client-side |
| Auto-flush | Manual flush() required | Automatic |

Migration overview

1. Install HoneyHive SDK: Add HoneyHive alongside Langfuse with no conflicts.
2. Configure API keys: Set up your HoneyHive project and credentials.
3. Update tracing code: Replace Langfuse decorators and callbacks with HoneyHive equivalents.
4. Migrate evaluations: Move scoring logic to HoneyHive evaluators.
5. Validate and remove Langfuse: Confirm traces appear correctly, then uninstall Langfuse.

Step 1: Install HoneyHive SDK

HoneyHive installs cleanly alongside your existing stack:
# Your existing Langfuse setup
pip install langfuse openai==1.40.0

# Add HoneyHive (no conflicts)
pip install honeyhive

# Verify no dependency conflicts
pip check
Run pip list | grep -E "honeyhive|langfuse|openai" to verify all packages coexist.

Step 2: Configure API keys

Get your HoneyHive API key

  1. Log in to HoneyHive
  2. Go to Settings > API Keys
  3. Click Create New Key and copy it

Set environment variables

# HoneyHive
export HH_API_KEY="your-honeyhive-api-key"
export HH_PROJECT="your-project-name"

# Keep Langfuse active during migration (optional)
export LANGFUSE_SECRET_KEY="your-langfuse-secret"
export LANGFUSE_PUBLIC_KEY="your-langfuse-public"
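
In code, initialization can then read these variables. A minimal sketch, assuming the HoneyHiveTracer.init signature shown in Troubleshooting later in this guide, and passing the values explicitly rather than relying on automatic env-var pickup (which is not confirmed here):

```python
import os

# Read the credentials exported above; a KeyError here means the
# environment is not set up yet.
api_key = os.environ["HH_API_KEY"]
project = os.environ["HH_PROJECT"]

from honeyhive import HoneyHiveTracer

# Initialize before importing the libraries you want traced
# (see Troubleshooting: initialization order matters).
HoneyHiveTracer.init(api_key=api_key, project=project)
```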

Step 3: Update tracing code

Basic decorator tracing

from langfuse.decorators import observe, langfuse_context
from openai import OpenAI

client = OpenAI()

@observe()
def generate_response(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

# Must flush manually
langfuse_context.flush()
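
The HoneyHive side of this example replaces @observe() with @trace() and drops the manual flush, per the API mapping later in this guide. A hedged sketch; the exact import path for trace is an assumption:

```python
from honeyhive import HoneyHiveTracer, trace  # import path for trace assumed
from openai import OpenAI

HoneyHiveTracer.init(api_key="...", project="...")
client = OpenAI()

@trace()  # replaces @observe()
def generate_response(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

# No manual flush: HoneyHive auto-flushes.
```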

Nested spans with metadata

from langfuse.decorators import observe, langfuse_context

@observe()
def process_document(doc: str):
    langfuse_context.update_current_observation(
        metadata={"doc_length": len(doc)}
    )
    
    with langfuse_context.observe(name="extract_entities") as span:
        entities = extract(doc)
        span.update(output={"entities": entities})
    
    with langfuse_context.observe(name="summarize") as span:
        summary = summarize(doc, entities)
        span.update(output={"summary": summary})
    
    return summary
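
A HoneyHive version of the same flow can use nested @trace decorators with enrich_span for outputs and metadata, per the API mapping later in this guide. A sketch under those assumptions (import paths assumed; extract and summarize are the same placeholder helpers as in the example above):

```python
from honeyhive import trace, enrich_span  # import paths assumed

@trace(event_name="extract_entities")
def extract_entities(doc: str):
    entities = extract(doc)              # placeholder helper
    enrich_span(outputs={"entities": entities})  # replaces span.update(output=...)
    return entities

@trace(event_name="summarize")
def summarize_step(doc: str, entities):
    summary = summarize(doc, entities)   # placeholder helper
    enrich_span(outputs={"summary": summary})
    return summary

@trace(event_name="process_document")
def process_document(doc: str):
    # Replaces langfuse_context.update_current_observation(metadata=...)
    enrich_span(metadata={"doc_length": len(doc)})
    entities = extract_entities(doc)
    return summarize_step(doc, entities)
```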

OpenAI integration (BYOI pattern)

Langfuse patches the OpenAI client. HoneyHive uses the BYOI pattern instead: you choose the instrumentor, and your OpenAI client stays standard.
from langfuse import Langfuse
from langfuse.openai import openai  # Patched client

langfuse = Langfuse()

# Must use the patched client
response = openai.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
    name="greeting"  # Langfuse-specific param
)

langfuse.flush()
BYOI means freedom. Use any OpenAI SDK version, swap instrumentors without changing application code, and never worry about SDK conflicts.
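
One common BYOI shape pairs the standard OpenAI client with an OpenTelemetry-style instrumentor. The package and class below (opentelemetry-instrumentation-openai's OpenAIInstrumentor) are illustrative assumptions, not a documented HoneyHive pairing; any instrumentor that emits OpenAI spans should slot in the same way:

```python
from honeyhive import HoneyHiveTracer
from opentelemetry.instrumentation.openai import OpenAIInstrumentor  # assumed choice
from openai import OpenAI  # standard, unpatched client

HoneyHiveTracer.init(api_key="...", project="...")
OpenAIInstrumentor().instrument()  # you choose, and can swap, the instrumentor

response = OpenAI().chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)
# No patched import, no Langfuse-specific params, no flush()
```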

Step 4: Migrate evaluations

Langfuse uses client-side scoring. HoneyHive supports both client-side enrichment and server-side evaluators that run automatically on every trace.
from langfuse import Langfuse

langfuse = Langfuse()

langfuse.score(
    trace_id="...",
    name="quality",
    value=0.8
)
Server-side evaluators run asynchronously on all traces. Configure them once in the HoneyHive dashboard and they apply to every session with no code changes required.
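
For scores you still want to emit from code, the client-side equivalent of langfuse.score() is enrich_span(metrics=...), per the API mapping later in this guide. A minimal sketch (import paths assumed; run_model is a hypothetical helper standing in for your generation call):

```python
from honeyhive import trace, enrich_span  # import paths assumed

@trace()
def answer(question: str) -> str:
    result = run_model(question)  # hypothetical helper
    # Replaces langfuse.score(trace_id=..., name="quality", value=0.8):
    # metrics attach to the current span, no trace_id bookkeeping needed.
    enrich_span(metrics={"quality": 0.8})
    return result
```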

Step 5: Validate and remove Langfuse

Validate traces

  1. Run your application with typical workloads
  2. Compare traces side by side in Langfuse and HoneyHive
  3. Verify that spans, metadata, and timing match

Complete the switch

# Remove Langfuse env vars
unset LANGFUSE_SECRET_KEY
unset LANGFUSE_PUBLIC_KEY

# Uninstall Langfuse
pip uninstall langfuse
# Clean up Langfuse imports and calls:
# - from langfuse import ...
# - from langfuse.decorators import observe
# - langfuse_context.flush()
# - langfuse.score(...)
Confirm traces and evaluations appear correctly in HoneyHive after removing Langfuse.

API mapping quick reference

| Langfuse | HoneyHive | Notes |
|---|---|---|
| @observe() | @trace() | Function decorator for creating spans |
| langfuse_context.observe() | @trace() (nested) | Child spans via nested decorators |
| span.update(output=...) | enrich_span(outputs=...) | Set span output |
| langfuse_context.update_current_observation() | enrich_span(metadata=...) | Attach metadata |
| langfuse.score() | enrich_span(metrics=...) | Record metrics and scores |
| langfuse.flush() | Not needed | HoneyHive auto-flushes |
| from langfuse.openai import openai | OpenAI() + instrumentor | BYOI pattern, no patching |

Data migration field reference

When migrating historical Langfuse data to HoneyHive, each Langfuse object type maps to a HoneyHive equivalent. Use this reference alongside your migration script.

Object type mapping

| Langfuse object | HoneyHive object | API endpoint |
|---|---|---|
| Trace | Session | /session/start |
| Generation | Event (model) | /events |
| Span | Event (chain) | /events |
| Event | Event (tool) | /events |
| Score | Session metadata | /session/start |

Event type mapping

| Langfuse type | HoneyHive event_type |
|---|---|
| Trace | session |
| GENERATION | model |
| SPAN | chain |
| EVENT | tool |

Trace to Session

Langfuse traces become HoneyHive sessions.

| Langfuse field | HoneyHive field | Transformation |
|---|---|---|
| (auto) | session_id | Generated UUID |
| id | metadata.langfuse_trace_id | Direct |
| id | metadata.langfuse_original_id | Direct |
| name | session_name | Direct |
| userId | user_properties.user_id | Wrapped |
| sessionId | external_id | Direct |
| input | inputs | Direct |
| output | outputs | Direct |
| metadata | metadata | Merged |
| tags | metadata.tags | Moved |
| version | metadata.version | Moved |
| release | metadata.release | Moved |
| (auto) | source | Set to "langfuse" |
| (auto) | event_type | Set to "session" |
| (auto) | project | From config |
| (auto) | metadata.is_session | Set to true |
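
The mapping above can be sketched as a plain transformation function. Field names follow the table; the payload shape and the uuid choice are illustrative, not the script's exact code:

```python
import uuid

def langfuse_trace_to_session(trace: dict, project: str) -> dict:
    """Map one Langfuse trace dict to a HoneyHive /session/start payload,
    following the field table above."""
    metadata = dict(trace.get("metadata") or {})  # merged
    metadata.update({
        "langfuse_trace_id": trace["id"],
        "langfuse_original_id": trace["id"],
        "tags": trace.get("tags"),
        "version": trace.get("version"),
        "release": trace.get("release"),
        "is_session": True,
    })
    return {
        "session_id": str(uuid.uuid4()),                      # generated UUID
        "session_name": trace.get("name"),
        "user_properties": {"user_id": trace.get("userId")},  # wrapped
        "external_id": trace.get("sessionId"),
        "inputs": trace.get("input"),
        "outputs": trace.get("output"),
        "metadata": metadata,
        "source": "langfuse",
        "event_type": "session",
        "project": project,
    }
```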

Generation to Event (model)

Langfuse generations become HoneyHive model events. LLM-specific fields like model name, token counts, and cost are preserved.

| Langfuse field | HoneyHive field | Transformation |
|---|---|---|
| (auto) | event_id | Generated UUID |
| traceId | session_id | Mapped via trace_id_to_session_id |
| id | metadata.langfuse_observation_id | Direct |
| name | event_name | Direct (default: "generation") |
| (auto) | event_type | Set to "model" |
| model | config.model | Moved |
| modelParameters | config.hyperparameters | Moved |
| modelParameters.provider | config.provider | Extracted |
| input | inputs.messages | String converted to message array |
| prompt | inputs.messages | String converted to message array |
| output | outputs.text | String extraction |
| completion | outputs.text | Direct |
| promptTokens | usage.prompt_tokens | Renamed |
| completionTokens | usage.completion_tokens | Renamed |
| totalTokens | usage.total_tokens | Renamed |
| calculatedTotalCost | cost | Direct |
| startTime | start_time | ISO 8601 to Unix ms |
| endTime | end_time | ISO 8601 to Unix ms |
| (auto) | latency_ms | Calculated: end_time - start_time |
| (auto) | duration | Same as latency_ms |
| level | status | "ERROR" becomes "error", else "success" |
| statusMessage | error | When level = "ERROR" |
| metadata | metadata | Merged |
| traceId | metadata.langfuse_trace_id | Direct |
| (auto) | metadata.observation_type | Set to "GENERATION" |
| level | metadata.level | Direct |
| (auto) | source | Set to "langfuse" |
| (auto) | project | From config |

Input transformation:

| Langfuse input type | HoneyHive output |
|---|---|
| String | {"messages": [{"role": "user", "content": "<string>"}]} |
| Array of messages | {"messages": <array>} |
| Object with messages | Direct passthrough |
| null or missing | {} |

Output transformation:

| Langfuse output type | HoneyHive output |
|---|---|
| String | {"text": "<string>"} |
| Object | Direct passthrough |
| null or missing | {} |
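
The two transformation tables can be expressed as small helpers; a sketch, not the migration script's exact code:

```python
def normalize_generation_input(value):
    """Apply the input-transformation rules from the table above."""
    if value is None:
        return {}
    if isinstance(value, str):
        return {"messages": [{"role": "user", "content": value}]}
    if isinstance(value, list):
        return {"messages": value}
    # Objects (with a "messages" key) pass through unchanged; the table
    # does not specify other object shapes, so they pass through too.
    return value

def normalize_generation_output(value):
    """Apply the output-transformation rules from the table above."""
    if value is None:
        return {}
    if isinstance(value, str):
        return {"text": value}
    return value  # objects pass through unchanged
```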

Span / Event to Event (chain or tool)

Langfuse spans become HoneyHive chain events; Langfuse events become tool events.

| Langfuse field | HoneyHive field | Transformation |
|---|---|---|
| (auto) | event_id | Generated UUID |
| traceId | session_id | Mapped via trace_id_to_session_id |
| id | metadata.langfuse_observation_id | Direct |
| name | event_name | Direct (default: "span") |
| type | event_type | "SPAN" becomes "chain", "EVENT" becomes "tool" |
| input | inputs | Direct |
| output | outputs | Direct |
| startTime | start_time | ISO 8601 to Unix ms |
| endTime | end_time | ISO 8601 to Unix ms |
| level | status | "ERROR" becomes "error", else "success" |
| statusMessage | error | When level = "ERROR" |
| metadata | metadata | Merged |
| traceId | metadata.langfuse_trace_id | Direct |
| (auto) | metadata.observation_type | Set to "SPAN" or "EVENT" |
| level | metadata.level | Direct |
| (auto) | source | Set to "langfuse" |
| (auto) | project | From config |

Score to Session metadata

Langfuse scores are stored as session metadata. HoneyHive does not persist top-level feedback or metrics via /session/start, so scores are placed in the metadata object.

| Langfuse field | HoneyHive field | Transformation |
|---|---|---|
| traceId | (lookup) | Used to find session_id via mapping |
| id | metadata.langfuse_scores[].id | Direct |
| observationId | metadata.langfuse_scores[].observation_id | Direct |
| name | metadata.langfuse_scores[].name | Direct |
| value | metadata.langfuse_scores[].value | Direct |
| source | metadata.langfuse_scores[].source | Direct |
| comment | metadata.langfuse_scores[].comment | Direct |
| traceId | metadata.langfuse_scores[].trace_id | Direct |

Derived fields:

| HoneyHive field | Derivation |
|---|---|
| metadata.langfuse_score_feedback | Object with {score_name: {value, source}} |
| metadata.langfuse_score_metrics | Object with {score_name: numeric_value} |
| metadata.langfuse_score_count | Count of scores |
| metadata.has_feedback | true if scores exist |
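
The derived fields above can be built in one pass over a trace's scores; a sketch, not the script's exact code:

```python
def derive_score_metadata(scores: list) -> dict:
    """Build the derived score fields listed above from a list of
    Langfuse score dicts (with name, value, and optionally source)."""
    feedback = {
        s["name"]: {"value": s["value"], "source": s.get("source")}
        for s in scores
    }
    # Only numeric values go into the metrics object (bools excluded).
    metrics = {
        s["name"]: s["value"]
        for s in scores
        if isinstance(s["value"], (int, float)) and not isinstance(s["value"], bool)
    }
    return {
        "langfuse_score_feedback": feedback,
        "langfuse_score_metrics": metrics,
        "langfuse_score_count": len(scores),
        "has_feedback": bool(scores),
    }
```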

Data type transformations

Timestamps: ISO 8601 strings are converted to Unix milliseconds.

Token fields (camelCase to snake_case):

| Langfuse | HoneyHive |
|---|---|
| promptTokens | prompt_tokens |
| completionTokens | completion_tokens |
| totalTokens | total_tokens |

Status mapping:

| Langfuse level | HoneyHive status |
|---|---|
| "ERROR" | "error" |
| Any other value | "success" |
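
These three transformations are mechanical; a sketch of each:

```python
from datetime import datetime

def iso_to_unix_ms(ts: str) -> int:
    """Convert an ISO 8601 timestamp string to Unix milliseconds."""
    # Normalize a trailing "Z" for fromisoformat on older Python versions.
    return int(datetime.fromisoformat(ts.replace("Z", "+00:00")).timestamp() * 1000)

TOKEN_FIELDS = {
    "promptTokens": "prompt_tokens",
    "completionTokens": "completion_tokens",
    "totalTokens": "total_tokens",
}

def map_usage(generation: dict) -> dict:
    """Rename camelCase token fields to snake_case usage fields."""
    return {new: generation[old] for old, new in TOKEN_FIELDS.items() if old in generation}

def map_status(level) -> str:
    """'ERROR' becomes 'error'; everything else is 'success'."""
    return "error" if level == "ERROR" else "success"
```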

ID management

| ID type | Strategy |
|---|---|
| session_id | Generated UUID |
| event_id | Generated UUID |
| Original Langfuse IDs | Preserved in metadata fields |

The migration script maintains an in-memory trace_id_to_session_id dictionary that maps each Langfuse trace_id to its generated HoneyHive session_id. This mapping links events to sessions and attaches scores to the correct sessions.
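
The in-memory mapping described above can be sketched as a small helper (the class and method names are illustrative, not the script's actual interface):

```python
import uuid

class TraceSessionMap:
    """In-memory trace_id -> session_id mapping: traces are migrated
    first, then events and scores look up their session."""

    def __init__(self):
        self._map = {}

    def session_for(self, trace_id: str) -> str:
        # Each Langfuse trace gets exactly one generated session_id.
        if trace_id not in self._map:
            self._map[trace_id] = str(uuid.uuid4())
        return self._map[trace_id]

    def lookup(self, trace_id: str):
        # Scores or events whose trace was not migrated in this
        # run return None (see Known limitations below).
        return self._map.get(trace_id)
```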

Known limitations

  • In-memory mapping: Scores can only attach to sessions migrated in the same run.
  • Separate pagination: Traces and scores use independent pagination. A trace on page 19 may have scores on page 39.
  • No native score storage: Scores are stored in metadata since HoneyHive does not persist feedback/metrics at the top level via /session/start.

The following Langfuse fields have no migration target:

| Langfuse field | Reason |
|---|---|
| public | No equivalent in HoneyHive |
| bookmarked | No equivalent in HoneyHive |
| createdAt / updatedAt | HoneyHive uses its own timestamps |
| Prompt templates | Different architecture |
| Dataset items | Different architecture |
| Annotation queues | Not supported in migration script |

Troubleshooting

Traces not appearing

Symptom: Application runs but no traces show in HoneyHive.
  1. Verify your API key is set:
    import os
    print(os.getenv("HH_API_KEY"))
    
  2. Check initialization order. HoneyHive must initialize before other imports:
    # Initialize HoneyHive FIRST
    from honeyhive import HoneyHiveTracer
    HoneyHiveTracer.init(api_key="...", project="...")
    
    # Then import other libraries
    from openai import OpenAI
    

Missing child spans

Symptom: Parent traces appear but nested spans are missing.

Fix: use nested @trace decorators:
@trace(event_name="parent")
def parent():
    child()

@trace(event_name="child")
def child():
    do_work()

Evaluation scores not syncing

Symptom: Langfuse scores do not appear in HoneyHive. Langfuse scores are not migrated automatically. Recreate them using one of these approaches:
  1. Simple metrics: enrich_span(metrics={...}) in your code
  2. LLM evaluations: Configure server-side evaluators in the HoneyHive dashboard
  3. Human review: Set up annotation queues in HoneyHive

Next steps

LLM evaluators

Set up LLM-as-judge evaluators for automated quality scoring

Annotation queues

Create human review workflows for expert evaluation

Alerts and monitoring

Configure production alerts for quality degradation

Custom dashboards

Build custom metrics dashboards
Self-hosting Langfuse? HoneyHive offers dedicated cloud and self-hosted options with enterprise support. Contact sales@honeyhive.ai for migration assistance.