This document lists common issues and their solutions. If your issue isn’t listed, reach out on our Discord support channel.
Enable verbose logging for detailed debug output:
HoneyHiveTracer.init(
    api_key="...",
    project="...",
    verbose=True  # Enable debug logging
)

Common Errors Reference

| Error | Cause | Solution |
| --- | --- | --- |
| 403 Forbidden | Invalid API key or TRACELOOP_API_KEY conflict | Check API key; remove TRACELOOP_API_KEY env var |
| 401 Unauthorized | Missing or expired API key | Verify HONEYHIVE_API_KEY is set correctly |
| SSL Certificate Error | Certificate validation failure | See SSL/Certificate Issues |
| Connection Timeout | Network issues or firewall | See Timeout Handling |
| Read Timeout | Large payload or slow connection | Data is still logged; consider batch settings |
| No data in session | Package version mismatch | Update honeyhive package to latest |
| ImportError | Missing dependencies | Run pip install "honeyhive[all]>=1.0.0rc0" |
| Rate limit exceeded | Too many requests | Implement retry logic with backoff |
| Payload too large | Request exceeds 5MB limit | Truncate large inputs/outputs |
| Context not propagated | Thread/async context issues | See Multithreading guide |

SSL/Certificate Issues

Certificate Validation Failure

Symptom: SSLError: [SSL: CERTIFICATE_VERIFY_FAILED]

Solutions:
  1. Use system certificates:
import os
import ssl
import certifi

# Use certifi's certificate bundle
os.environ["SSL_CERT_FILE"] = certifi.where()

HoneyHiveTracer.init(...)
  2. Provide custom CA certificate:
import os

# If using corporate proxy with custom CA
os.environ["SSL_CERT_FILE"] = "/path/to/company-ca.pem"
os.environ["REQUESTS_CA_BUNDLE"] = "/path/to/company-ca.pem"

HoneyHiveTracer.init(...)
  3. Disable verification (development only):
# ⚠️ NEVER USE IN PRODUCTION
import ssl
ssl._create_default_https_context = ssl._create_unverified_context

Self-Signed Certificate

For on-premise deployments:
HoneyHiveTracer.init(
    api_key=api_key,
    project="my-project",
    api_endpoint="https://honeyhive.internal.company.com",  # or set HH_API_URL
    verify_ssl=True,
    ca_bundle="/etc/ssl/certs/company-ca.pem"
)

Proxy Configuration

HTTP/HTTPS Proxy

import os

# Set proxy environment variables
os.environ["HTTP_PROXY"] = "http://proxy.company.com:8080"
os.environ["HTTPS_PROXY"] = "http://proxy.company.com:8080"
os.environ["NO_PROXY"] = "localhost,127.0.0.1,.internal.company.com"

# Then initialize HoneyHive
HoneyHiveTracer.init(...)

Authenticated Proxy

# With username:password
os.environ["HTTPS_PROXY"] = "http://user:password@proxy.company.com:8080"

Proxy with Custom CA

os.environ["HTTPS_PROXY"] = "http://proxy.company.com:8080"
os.environ["REQUESTS_CA_BUNDLE"] = "/path/to/proxy-ca.pem"

Bypassing Proxy

# Bypass proxy for HoneyHive API
os.environ["NO_PROXY"] = "api.honeyhive.ai,.honeyhive.ai"

Timeout Handling

Connection Timeouts

Symptom: ConnectionError: Connection timed out

Solutions:
  1. Increase timeout:
HoneyHiveTracer.init(
    api_key=api_key,
    project="my-project",
    timeout=60  # Increase from default 30s (or set HH_EXPORT_TIMEOUT=60)
)
  2. Use batched async export (default): By default (disable_batch=False), spans are exported asynchronously in a background thread. span.end() returns immediately and spans are sent in batches, so export latency does not block your application.
HoneyHiveTracer.init(
    api_key=api_key,
    project="my-project",
    # disable_batch=False is the default - async batched export
)
  3. Disable batching for serverless:
HoneyHiveTracer.init(
    api_key=api_key,
    project="my-project",
    disable_batch=True  # Synchronous inline export - use for Lambda/serverless
)

Read Timeouts

Symptom: ReadTimeout error but data appears in dashboard

This is usually not a problem: data is being logged. The default batched async export handles this gracefully since exports happen in the background. If you are using disable_batch=True (synchronous mode), you can increase the timeout:
HoneyHiveTracer.init(
    api_key=api_key,
    project="my-project",
    timeout=120  # Longer timeout for slow networks
)

Retry on Failure

import os
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(
    stop=stop_after_attempt(3),
    wait=wait_exponential(multiplier=1, min=4, max=10)
)
def init_with_retry():
    HoneyHiveTracer.init(
        api_key=os.environ["HONEYHIVE_API_KEY"],
        project="my-project"
    )

init_with_retry()
Requires: pip install tenacity

Debugging Specific Issues

Checklist:
  1. Verify API key is set: echo $HONEYHIVE_API_KEY
  2. Check project name matches exactly
  3. Enable verbose mode: verbose=True
  4. Check firewall/VPN allows outbound HTTPS to api.honeyhive.ai
  5. Verify SSL certificate is valid
403 Forbidden or 401 Unauthorized

Solutions:
  1. Remove TRACELOOP_API_KEY from environment if present
  2. Verify API key is correct (check for whitespace)
  3. Ensure key has access to the specified project
  4. Check key hasn’t expired
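A stray space or newline from copy-pasting is a common cause of 401s. A quick sanity check before initializing the tracer can catch it; the helper below is a minimal sketch (check_api_key is our own illustrative name, not part of the SDK):

```python
import os

def check_api_key(var: str = "HONEYHIVE_API_KEY") -> str:
    """Fail fast with a clear message if the key is missing or has whitespace."""
    key = os.environ.get(var)
    if key is None:
        raise RuntimeError(f"{var} is not set")
    if key != key.strip():
        raise RuntimeError(f"{var} contains leading/trailing whitespace")
    return key

# key = check_api_key()  # call before HoneyHiveTracer.init(...)
```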
No data in session

Solutions:
  1. Update honeyhive package: pip install -U "honeyhive>=1.0.0rc0"
  2. Check your provider package versions are up to date
  3. Verify traced functions are being called
  4. For async code, ensure proper context propagation
  5. Check that your provider package versions match the SDK requirements
Read timeouts during export

Don’t worry: data is usually still logged. To reduce these warnings:
  1. Ensure disable_batch=False (default) so exports happen asynchronously in the background
  2. Increase timeout value if using disable_batch=True
Data missing or delayed at shutdown

Solutions:
  1. Call tracer.flush() at end of execution to drain the batch queue
  2. For Jupyter/serverless, always flush at end
  3. Reduce flush_interval for faster delivery (default is 5 seconds), or set the HH_FLUSH_INTERVAL env var
SSL/certificate errors

Solutions:
  1. Set SSL_CERT_FILE environment variable
  2. For corporate proxy, use company’s CA certificate
  3. Install certifi: pip install certifi
  4. Contact us for SSL .pem file if needed
ImportError

Solutions:
  1. Install full package: pip install "honeyhive[all]>=1.0.0rc0"
  2. For specific integrations: pip install "honeyhive[openai]>=1.0.0rc0"
  3. Check Python version (3.11+ required)
Rate limit exceeded

Solutions:
  1. Implement retry with exponential backoff
  2. Reduce trace frequency with sampling
  3. Contact support for higher limits (Enterprise)
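Retry with exponential backoff can be hand-rolled with the standard library. The sketch below is illustrative: RateLimitError and with_backoff are our own names, and your client may raise a different exception on HTTP 429.

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for whatever exception your client raises on HTTP 429."""

def with_backoff(fn, max_attempts=5, base=1.0, cap=30.0, sleep=time.sleep):
    """Retry fn() on RateLimitError with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # exhausted retries; surface the error
            delay = min(cap, base * (2 ** attempt))
            sleep(delay * random.uniform(0.5, 1.0))  # jitter avoids thundering herd
```

For sampling, trace only a fraction of requests (for example, skip tracing when random.random() > 0.1) to stay under the limit.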
Payload too large

Solutions:
  1. Truncate large inputs/outputs before tracing
  2. Use references (URLs) for large files
  3. Don’t trace binary data directly
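One way to truncate before tracing is to clip long strings recursively. This helper is a minimal sketch (truncate_payload and the 10,000-character cap are our own choices, not part of the SDK):

```python
def truncate_payload(value, max_chars: int = 10_000):
    """Recursively clip long strings so traced inputs/outputs stay small."""
    if isinstance(value, str) and len(value) > max_chars:
        return value[:max_chars] + f"... [truncated {len(value) - max_chars} chars]"
    if isinstance(value, dict):
        return {k: truncate_payload(v, max_chars) for k, v in value.items()}
    if isinstance(value, list):
        return [truncate_payload(v, max_chars) for v in value]
    return value  # numbers, bools, None, short strings pass through unchanged
```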
Context not propagated

See the Multithreading guide for proper context propagation patterns.
Sessions overwriting each other in a web server

Cause: Using session_start() in a web server, or not creating sessions per request. session_start() stores the session ID on the tracer instance, so concurrent requests overwrite each other’s session.

Solution: Use create_session() (sync) or acreate_session() (async), which store the session ID in request-scoped OpenTelemetry baggage:
@app.middleware("http")
async def session_middleware(request, call_next):
    await tracer.acreate_session(session_name=f"request-{request.url.path}")
    return await call_next(request)
See Tracer Initialization: Web Servers for full patterns.
Traces broken when using evaluate()

Cause: A global HoneyHiveTracer.init() call conflicts with the per-datapoint tracers that evaluate() creates automatically.

Solution: Remove the global tracer when using evaluate(). Don’t pass tracer= to @trace decorators on functions called by evaluate():
# Let evaluate() manage tracers
@trace(event_type="tool")  # No tracer parameter
def my_function(input):
    pass
See Tracer Initialization: Evaluation for details.

General Recommendations

Python

The default batched async export works for both serverless and notebooks; just call tracer.flush() before the execution context ends to drain any queued spans.
tracer = HoneyHiveTracer.init(
    api_key=api_key,
    project="my-project",
)

# ... your code ...

tracer.flush()  # Drain the batch queue before the process/cell ends
Large payloads work well with the default batched async export since the HTTP request happens in a background thread and doesn’t block your application. If you need to verify delivery, call tracer.flush() after the span completes.
See the Multithreading guide for async context propagation.

Fallback Solution

If all else fails:
  1. Separate provider calls into dedicated functions
  2. Use @trace decorator on those functions
  3. This gives you manual control over what’s traced

Known Limitations

| Limitation | Workaround |
| --- | --- |
| Colab notebooks not supported | Use manual instrumentation |
| Max request size: 5 MB | Truncate large payloads |
| Max nesting depth: 5 levels | Flatten deeply nested structures |
| Rate limit: 1,000 req/min | Use sampling; contact for Enterprise limits |
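For the nesting-depth limit, deeply nested metadata can be flattened into dotted keys before attaching it to a span. A minimal sketch (flatten is our own illustrative helper, not an SDK function):

```python
def flatten(obj, prefix="", sep="."):
    """Flatten nested dicts into a single level with dotted keys."""
    flat = {}
    for key, value in obj.items():
        path = f"{prefix}{sep}{key}" if prefix else str(key)
        if isinstance(value, dict):
            flat.update(flatten(value, path, sep))  # recurse into nested dicts
        else:
            flat[path] = value
    return flat
```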

Rate Limits

| Resource | Default Limit | Enterprise |
| --- | --- | --- |
| Requests per minute | 1,000 | Configurable |
| Max request size | 5 MB | 5 MB |
| Filter nesting depth | 5 levels | 5 levels |
Enterprise-plan users can configure higher rate limits.

Still Need Help?

Discord Community

Get help from the community

Email Support

Contact our support team