This document lists common issues users run into when using our tracer. Please try the proposed solutions and see if they resolve your problem. If your issue persists or isn’t covered here, please reach out on our Discord support channel.

Debugging specific issues

For detailed SDK logs, we recommend setting verbose=True when initializing the Python tracer to see the full error trace.
Validate that the project name and API key are being set correctly. Finally, check your VPN settings and whitelist our SSL certificate.
Please remove TRACELOOP_API_KEY from your environment if present
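For example, here is a minimal initialization sketch covering the points above. The parameters shown (api_key, project, verbose) follow the usage described in this document, but confirm the exact names against your SDK version; the environment variable name is illustrative.

    import os
    from honeyhive import HoneyHiveTracer

    # Enable verbose SDK logs and pass the project name and API key
    # explicitly so you can confirm they are being set correctly.
    HoneyHiveTracer.init(
        api_key=os.environ["HH_API_KEY"],  # illustrative env var name
        project="my-project",              # hypothetical project name
        verbose=True,                      # print the full error trace
    )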
  • Upgrade your honeyhive package version to the latest and re-run your code
Run pip install -U honeyhive (Python) or npm update honeyhive (JavaScript)
  • Validate the provider package version you are running against our list here and update your provider package to match our latest tested version
Run pip freeze | grep <package-name> or npm list <package-name> to get the version you are running on your machine (a Python alternative is sketched after this list). Then refer to the table below to see if your package version is too far ahead of our latest tested version
  • In JavaScript, please update your Node version to a later minor version.
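If you’d rather check the installed version from Python directly, here is a small standard-library sketch (the package name is just an example):

    from importlib.metadata import PackageNotFoundError, version

    # Print the installed version of a provider package, e.g. openai,
    # then compare it against the tested-version table below.
    try:
        print(version("openai"))
    except PackageNotFoundError:
        print("openai is not installed")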
Don’t worry: data is still getting logged. The OTEL exporter is timing out while waiting for a response from our ingestion endpoint. We are working on fixing this issue.
Set disable_batch=True on the Python tracer to allow the data to be sent earlier
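For example (a sketch; disable_batch is the same flag covered under the general recommendations below, and the other init parameters are hedged as above):

    from honeyhive import HoneyHiveTracer

    # Disable batching so each span is exported immediately,
    # before the OTEL exporter's timeout can fire.
    HoneyHiveTracer.init(
        api_key="...",           # your HoneyHive API key
        project="my-project",    # hypothetical project name
        disable_batch=True,
    )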
Ensure that the SSL_CERT_FILE environment variable is set correctly.
  • Request the SSL .pem file from us.
  • Save the file to a location accessible in your code.
  • Set the SSL_CERT_FILE environment variable to point to the file’s location.
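For example, in Python (the file path is hypothetical; set the variable before the tracer makes any network calls):

    import os

    # Point SSL_CERT_FILE at the .pem file we provided.
    os.environ["SSL_CERT_FILE"] = "/path/to/honeyhive.pem"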

General recommendations for Python

  • Add HoneyHiveTracer.flush() at the end of your application code
  • Set disable_batch=True so data is sent as the code executes; sending a large batch can also cause timeout issues (see the sketch after this list)
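A sketch combining both recommendations, using the same hedged init parameters as above:

    from honeyhive import HoneyHiveTracer

    HoneyHiveTracer.init(
        api_key="...",           # your HoneyHive API key
        project="my-project",    # hypothetical project name
        disable_batch=True,      # send data as the code executes
    )

    # ... application code ...

    # Flush any remaining spans before the process exits.
    HoneyHiveTracer.flush()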
A final fallback solution for both Python and JavaScript if all else fails is to

Known issues

  1. Colab notebooks aren’t supported by our Python auto-tracer
  2. ES Module projects aren’t supported by our JavaScript auto-instrumentation
ES Module projects can still use our custom spans, and projects using LangChain are supported via our LangChain callback handler

Tracing Rate Limits

Individual requests are capped at 5 MB. Filters and aggregates support up to 5 levels of nesting. The default rate limit is 1,000 requests per minute.
Enterprise-plan users can set higher rate limits.
Our online evaluators can be scoped by event_type and event_name.
We can support more granular filters for evaluators if needed.

Latest Package Versions tested

As of 09/18/2024, the tables below list the latest version of each provider’s package that we have tested our tracers against. Later versions are often also supported: as long as the provider hasn’t changed the contract of the specific inference functions being auto-traced, the tracer will continue to work.

Python packages tested

Package                         Version tested for tracing
langchain                       0.2.5
llama-index                     0.10.59
openai                          1.31.1
aleph_alpha_client              7.1.0
boto3                           1.34.120
chromadb                        0.5.0
cohere                          5.3.3
google-generativeai             0.6.0
groq                            0.10.0
anthropic                       0.25.2
mistralai                       0.2.0
ollama                          0.2.0
pinecone-client                 5.0.0
qdrant-client                   1.9.1
replicate                       0.23.1
together                        1.2.0
weaviate-client                 3.26.0
haystack-ai                     2.0.0
marqo                           3.5.1
milvus                          2.4.1
ibm-watson-machine-learning     1.0.333

JavaScript packages tested

Package                             Version tested for tracing
langchain                           0.2.12
llamaindex                          0.1.16
@anthropic-ai/sdk                   0.27.1
@azure/openai                       1.0.0-beta.10
@aws-sdk/client-bedrock-runtime     3.499.0
chromadb                            1.8.1
cohere-ai                           7.7.5
openai                              4.57.0
ollama                              0.2.0
@pinecone-database/pinecone         2.0.1
@qdrant/js-client-rest              1.9.0
@google-cloud/vertexai              1.2.0
@google-cloud/aiplatform            3.10.0