Debugging specific issues
For detailed SDK logs, we recommend setting the argument `verbose=True` on the Python tracer initialization to see the error trace.
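For instance, enabling verbose logging might look like the following sketch. The argument names other than `verbose` are illustrative placeholders; check your SDK version for the exact `HoneyHiveTracer.init` signature:

```python
# Sketch of enabling verbose SDK logs at tracer initialization.
# The argument names besides `verbose` are illustrative placeholders.
init_kwargs = {
    "api_key": "<your-honeyhive-api-key>",
    "project": "<your-project-name>",
    "verbose": True,  # emit detailed SDK logs, including error traces
}

# With the honeyhive package installed you would then call:
# from honeyhive import HoneyHiveTracer
# HoneyHiveTracer.init(**init_kwargs)
print(sorted(init_kwargs))
```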
I don’t see any tracer initialization message and no data is being logged.
Validate that the project name and API key are being set correctly. Finally, check your VPN settings and whitelist our SSL certificate.
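A quick way to check this is to print what your process actually sees. The variable names below (`HH_API_KEY`, `HH_PROJECT`) are illustrative; substitute whichever environment variables your setup reads:

```python
import os

def check_env(*names: str) -> dict:
    """Return {name: True/False} for whether each env var is set and non-empty."""
    return {name: bool(os.environ.get(name)) for name in names}

# Illustrative variable names; use the ones your tracer setup reads.
status = check_env("HH_API_KEY", "HH_PROJECT")
for name, present in status.items():
    print(f"{name}: {'set' if present else 'NOT set'}")
```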
I am getting a `403` status code error.
Please remove `TRACELOOP_API_KEY` from your environment if present.
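In a shell you would `unset TRACELOOP_API_KEY`; from Python you can drop it for the current process like this:

```python
import os

# Remove the conflicting variable for the current process, if present.
removed = os.environ.pop("TRACELOOP_API_KEY", None)
print("removed" if removed is not None else "was not set")
```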
I can see a session but there’s no data inside it.
- Upgrade your `honeyhive` package version to the latest and re-run your code: `pip install -U honeyhive` / `npm update honeyhive`
- Validate the provider package version you are running against our list here, and update your provider package to match our latest tested version. Run `pip freeze | grep <package-name>` or `npm list <package-name>` to get the version you are running on your machine, then refer to the table below to see if your package version is too far ahead of our latest tested version.
- In JavaScript, please update your Node version to a later minor version.
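On Python you can also read the installed version programmatically with the standard library's `importlib.metadata`, and compare the result against the tested-versions table below:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package: str):
    """Return the installed version string, or None if the package is absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# e.g. compare installed_version("openai") against the tested-versions table
print(installed_version("openai"))
```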
I am seeing a `Read timeout` error.
Don’t worry: data is still being logged. OTEL is timing out the response from our ingestion endpoint. We are working on fixing this issue.
I can see the data being logged, but it is taking a long time to show up.
Set `disable_batch=True` on the Python tracer to allow the data to be sent earlier.
I am encountering an SSL validation failure.
Ensure that the `SSL_CERT_FILE` environment variable is set correctly.
- Request the SSL `.pem` file from us.
- Save the file to a location accessible in your code.
- Set the `SSL_CERT_FILE` environment variable to point to the file’s location.
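The steps above amount to something like the following; the `.pem` path is a placeholder for wherever you saved the file:

```python
import os

# Placeholder path; point this at the .pem file you received from us.
cert_path = "/path/to/honeyhive.pem"

if os.path.isfile(cert_path):
    os.environ["SSL_CERT_FILE"] = cert_path
else:
    print(f"certificate not found at {cert_path}")
```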
General recommendations for Python
Running code in a serverless environment or Jupyter notebook
- Add `HoneyHiveTracer.flush()` at the end of your application code
- Set `disable_batch=True` to ensure the data is being sent as the code executes
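Putting both recommendations together, a sketch of the pattern is shown below. A stand-in class is used so the snippet runs without the SDK installed; with the real package you would call `HoneyHiveTracer.init(...)` and `HoneyHiveTracer.flush()` instead:

```python
import atexit

class HoneyHiveTracerStandIn:
    """Stand-in for honeyhive.HoneyHiveTracer so this sketch is runnable."""
    flushed = False

    @classmethod
    def flush(cls):
        cls.flushed = True  # the real call sends any pending spans

# With the real SDK you would initialize with disable_batch=True, which
# sends each span immediately instead of batching.

# Ensure a final flush even if the serverless runtime tears down abruptly.
atexit.register(HoneyHiveTracerStandIn.flush)

# ... application code ...
HoneyHiveTracerStandIn.flush()  # explicit flush at the end of your handler
```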
Handling large amounts of data (greater than 100k tokens)
- Set `disable_batch=True`, since sending a large batch might cause timeout issues
Dealing with async execution steps in your code
- Refer to our multi-threading docs for Python to learn how to propagate context correctly
- Separate your provider call into its own function
- Manually instrument that function by adding the `trace` decorator to it (or `traceFunction` for JS/TS)
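A minimal Python sketch of this pattern is shown below, using a stand-in decorator so the snippet runs without the SDK; with the real package you would import `trace` from `honeyhive` instead:

```python
def trace(fn):
    """Stand-in for honeyhive's `trace` decorator; replace with the real import."""
    def wrapper(*args, **kwargs):
        # The real decorator records a span around this call.
        return fn(*args, **kwargs)
    return wrapper

@trace
def call_provider(prompt: str) -> str:
    # Your separated provider call (openai, anthropic, ...) goes here.
    return f"response to: {prompt}"

print(call_provider("hello"))
```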
Known issues
- Colab notebooks aren’t supported by our Python auto-tracer
- ES Module projects aren’t supported by our JavaScript auto-instrumentation. ES Module projects are supported by our custom spans, however, and those using LangChain are supported by our LangChain callback handler.
Tracing Rate Limits
We support up to 5 MB on individual requests.
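If you generate large payloads, you can sanity-check the serialized size before sending. This is a sketch using the standard library, assuming binary megabytes:

```python
import json

MAX_REQUEST_BYTES = 5 * 1024 * 1024  # the 5 MB per-request limit above

def payload_within_limit(payload: dict) -> bool:
    """Check that the JSON-serialized payload fits in one request."""
    return len(json.dumps(payload).encode("utf-8")) <= MAX_REQUEST_BYTES

print(payload_within_limit({"event_name": "demo", "output": "x" * 100}))
```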
Our filters and aggregates are supported up to 5 levels of nesting.
We have a default rate limit of 1000 requests per minute.
Enterprise-plan users can set higher rate limits.
Evaluators can be filtered by `event_type` and `event_name`. We can support more granular filters for evaluators if needed.
Latest Package Versions tested
As of 09/18/2024
The tables below list the latest version of each provider’s package that we have tested our tracers against. Later versions are often supported as well: as long as the provider hasn’t changed the contract on the specific inference functions being auto-traced, the tracer will continue to work.
Python packages tested
Package | Version Tested for Tracing |
---|---|
langchain | 0.2.5 |
llama-index | 0.10.59 |
openai | 1.31.1 |
aleph_alpha_client | 7.1.0 |
boto3 | 1.34.120 |
chromadb | 0.5.0 |
cohere | 5.3.3 |
google-generativeai | 0.6.0 |
groq | 0.10.0 |
anthropic | 0.25.2 |
mistralai | 0.2.0 |
ollama | 0.2.0 |
pinecone-client | 5.0.0 |
qdrant-client | 1.9.1 |
replicate | 0.23.1 |
together | 1.2.0 |
weaviate-client | 3.26.0 |
haystack-ai | 2.0.0 |
marqo | 3.5.1 |
milvus | 2.4.1 |
ibm-watson-machine-learning | 1.0.333 |
Javascript packages tested
Package | Version |
---|---|
langchain | 0.2.12 |
llamaindex | 0.1.16 |
@anthropic-ai/sdk | 0.27.1 |
@azure/openai | 1.0.0-beta.10 |
@aws-sdk/client-bedrock-runtime | 3.499.0 |
chromadb | 1.8.1 |
cohere-ai | 7.7.5 |
openai | 4.57.0 |
ollama | 0.2.0 |
@pinecone-database/pinecone | 2.0.1 |
@qdrant/js-client-rest | 1.9.0 |
@google-cloud/vertexai | 1.2.0 |
@google-cloud/aiplatform | 3.10.0 |