
Which Pattern?

| Environment | Pattern | Why |
| --- | --- | --- |
| AWS Lambda / Cloud Functions | Lazy Init | Reuse tracer across warm starts |
| FastAPI / Flask / Django | Global Tracer | Single instance, concurrent-safe |
| Kubernetes / Docker | Env Config | Secrets via env vars |

Serverless

Why: In serverless, the first request (“cold start”) initializes everything from scratch. Subsequent requests (“warm starts”) reuse the same container. Lazy initialization takes advantage of this: initialize the tracer once, then reuse it across warm requests.

```python
from honeyhive import HoneyHiveTracer, trace
import os
from typing import Optional

_tracer: Optional[HoneyHiveTracer] = None  # Survives warm starts

def get_tracer() -> HoneyHiveTracer:
    global _tracer
    if _tracer is None:
        _tracer = HoneyHiveTracer.init(
            api_key=os.getenv("HH_API_KEY"),
            project=os.getenv("HH_PROJECT"),
            source="lambda",
            disable_batch=True,  # Recommended for serverless
        )
    return _tracer

def lambda_handler(event, context):
    tracer = get_tracer()
    result = process_event(event)
    tracer.enrich_session(
        outputs={"result": result},
        metadata={"request_id": context.aws_request_id}
    )
    tracer.flush()  # No-op with disable_batch=True, but harmless safety net
    return result

@trace()
def process_event(event):
    get_tracer().enrich_span(metadata={"event_type": event.get("type")})
    return {"status": "success"}
```

Alternative: `functools.lru_cache` achieves the same lazy initialization:

```python
import os
from functools import lru_cache

from honeyhive import HoneyHiveTracer

@lru_cache(maxsize=1)
def get_tracer():
    return HoneyHiveTracer.init(
        api_key=os.getenv("HH_API_KEY"),
        project=os.getenv("HH_PROJECT"),
        disable_batch=True,  # Recommended for serverless
    )
```

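Why this works: `lru_cache(maxsize=1)` memoizes the first return value, so the wrapped function body runs once and every later call gets the cached instance. A minimal, HoneyHive-free sketch (`get_client` and `init_calls` are illustrative names):

```python
from functools import lru_cache

init_calls = []

@lru_cache(maxsize=1)
def get_client():
    """Stand-in for an expensive tracer/client initialization."""
    init_calls.append(1)
    return object()

a = get_client()
b = get_client()
assert a is b                 # every caller sees the same instance
assert len(init_calls) == 1   # the body ran exactly once
```

A side benefit over the module-level global: `get_client.cache_clear()` resets the singleton, which is convenient in tests.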
Server

Why: Web servers handle many concurrent requests. Initialize the tracer once when the app starts, then create a new session per request using create_session() (sync) or acreate_session() (async) so each request gets isolated traces.
For multi-turn conversations, custom session IDs, and scoped sessions, see Tracer Initialization.

FastAPI

```python
from fastapi import FastAPI, Request
from honeyhive import HoneyHiveTracer, trace
import os

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project="my-api",
    source="production"
)

app = FastAPI()

@app.middleware("http")
async def session_middleware(request: Request, call_next):
    # Create isolated session per request (stored in baggage, not on tracer)
    session_id = await tracer.acreate_session(
        session_name=f"api-{request.url.path}",
        inputs={
            "method": request.method,
            "path": str(request.url.path),
            "user_id": request.headers.get("X-User-ID")
        }
    )
    response = await call_next(request)
    tracer.enrich_session(outputs={"status_code": response.status_code})
    return response

@app.post("/api/chat")
@trace(event_type="chain", tracer=tracer)
async def chat_endpoint(message: str):
    tracer.enrich_span(metadata={"message_length": len(message)})
    response = await process_message(message)
    return {"response": response}

@trace(event_type="tool", tracer=tracer)
async def process_message(message: str):
    # Nested spans automatically captured
    return message.upper()
```

Flask

```python
from flask import Flask, request
from honeyhive import HoneyHiveTracer, trace
import os

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project="my-flask-app",
    source="production"
)

app = Flask(__name__)

@app.before_request
def create_session_for_request():
    tracer.create_session(
        session_name=f"flask-{request.path}",
        inputs={"method": request.method}
    )

@app.after_request
def enrich_session_after_request(response):
    tracer.enrich_session(outputs={"status_code": response.status_code})
    return response

@app.route("/api/process", methods=["POST"])
@trace(event_type="tool", tracer=tracer)
def process_endpoint():
    return {"result": "ok"}
```

Error Handling

Tracing should never crash your app. Handle missing config gracefully:

```python
import os
import logging
from honeyhive import HoneyHiveTracer

logger = logging.getLogger(__name__)

def init_tracer():
    api_key = os.getenv("HH_API_KEY")
    if not api_key:
        logger.warning("HH_API_KEY not set, tracing disabled")
        return None
    try:
        return HoneyHiveTracer.init(
            api_key=api_key,
            project=os.getenv("HH_PROJECT", "my-app"),
            source=os.getenv("ENVIRONMENT", "production")
        )
    except Exception as e:
        logger.warning(f"Tracing init failed: {e}")
        return None

tracer = init_tracer()

def process_data(data):
    if tracer:
        tracer.enrich_span(metadata={"data_size": len(data)})
    return do_processing(data)
```

Environment Configuration

The Python SDK can be configured entirely through environment variables, which is the recommended approach for containerized and CI/CD deployments.
See the full Environment Variables Reference for all available variables, defaults, and aliases.

Kubernetes

```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: app
        env:
        - name: HH_API_KEY
          valueFrom:
            secretKeyRef:
              name: honeyhive-secrets
              key: api-key
        - name: HH_PROJECT
          value: "my-app-prod"
        - name: HH_SOURCE
          value: "production"
```

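For plain Docker deployments, the same three variables can be passed through docker-compose; a sketch in which the service name, image tag, and host-side secret injection are illustrative:

```yaml
services:
  app:
    image: my-app:latest
    environment:
      HH_PROJECT: "my-app-prod"
      HH_SOURCE: "production"
      # Keep the key out of the file; inject it from the host or a .env file.
      HH_API_KEY: "${HH_API_KEY}"
```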
Checklist

Before deploying:
  1. ✅ HH_API_KEY and HH_PROJECT environment variables set
  2. ✅ Tested with HH_API_KEY="" to verify graceful degradation
  3. ✅ Traces appearing in HoneyHive dashboard
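Item 2 can be smoke-tested locally without deploying; a sketch that mirrors the Error Handling guard (the `init_tracer` body here is a stand-in, not the real initializer):

```python
import os

def init_tracer():
    """Mirrors the Error Handling guard: a missing key disables tracing."""
    api_key = os.getenv("HH_API_KEY")
    if not api_key:
        return None
    return object()  # stand-in for HoneyHiveTracer.init(...)

os.environ["HH_API_KEY"] = ""   # simulate the misconfigured deployment
assert init_tracer() is None    # the app degrades to no tracing, not a crash
```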

What’s Next?

Tracer Initialization Patterns

Multi-turn sessions, scoped sessions, and patterns for serverless, web servers, and experiments

Trace Distributed Systems

Trace requests across service boundaries with context propagation
Questions? Join our Discord community or email support@honeyhive.ai