This method is designed for customers who:

  • Want more fine-grained control over input/output features that are logged
  • Are using a runtime language other than Python or TypeScript
  • Already have tracing set up and don’t want to use our tracers
  • Have package conflicts with our SDKs

You can use our APIs directly to log your application data to HoneyHive.

Our logging APIs are designed with minimal required properties and self-explanatory field names to make them easy to use.

If you have OpenTelemetry or OpenTracing configured for your application, contact us to get the OpenTelemetry exporter for HoneyHive.
We highly recommend that Python and JS/TS users use our custom tracers, which make event nesting significantly easier.

Prerequisites

All of the following strategies assume:

Logging Strategies

We offer different logging strategies for different application types and needs.

We have created specialized APIs to simplify LLM data ingestion, and a more generic event logging API for open-ended calls to external tools.

The ideal roadmap for logging your data to HoneyHive is:

  1. Sync LLM ingestion
  2. Sync LLM + Tool ingestion
  3. Async LLM + Tool ingestion
  4. Async LLM + Tool batching

LLM Data Ingestion

If you’d like to track only the LLM invocations, we provide two ways to manually ingest the logs, depending on your runtime requirements.

Sync

We normally recommend starting with a synchronous ingestion strategy because it is the easiest to set up.

Once your application traffic starts to scale, we recommend switching to an asynchronous ingestion strategy.

  1. POST /session/start

You start the HoneyHive session when your application execution begins.

The API reference for POST /session/start describes the properties you can track for a session.
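For illustration, here is a minimal sketch of starting a session from a Go service that calls the HTTP API directly. The base URL (https://api.honeyhive.ai), the bearer-token auth header, and the request and response fields shown (project, session_name, source, session_id) are assumptions for this sketch; treat the POST /session/start reference as the authoritative schema.

```go
// Minimal sketch of starting a HoneyHive session from Go.
// Base URL, auth header, and body fields are assumptions; see the
// POST /session/start API reference for the exact schema.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// postJSON sends a JSON payload to the HoneyHive API with bearer auth.
func postJSON(path string, payload any) (*http.Response, error) {
	body, err := json.Marshal(payload)
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", "https://api.honeyhive.ai"+path, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+os.Getenv("HONEYHIVE_API_KEY"))
	return http.DefaultClient.Do(req)
}

func main() {
	// Illustrative session properties; the full list is in the API reference.
	resp, err := postJSON("/session/start", map[string]any{
		"project":      "my-project",
		"session_name": "customer-chat",
		"source":       "production",
	})
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Keep the returned session_id to link later events to this session.
	var out struct {
		SessionID string `json:"session_id"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println("session_id:", out.SessionID)
}
```

The later sketches in this guide reuse this postJSON helper rather than repeating the HTTP boilerplate.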

  2. POST /events/model

At the end of every LLM call, log the model data to HoneyHive.

The API reference for POST /events/model captures all the relevant inputs, outputs, token counts, and duration data you’ll need.
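As a rough sketch of logging a single LLM call (reusing the hypothetical postJSON helper from the session-start example), the payload might look like the following; the field names are illustrative assumptions to verify against the POST /events/model reference.

```go
// Sketch: log one LLM call after it returns, reusing the postJSON helper.
// Field names are illustrative; check the POST /events/model reference.
func logModelEvent(sessionID string, latencyMs float64) error {
	resp, err := postJSON("/events/model", map[string]any{
		"session_id": sessionID,
		"project":    "my-project",
		"model":      "gpt-4o",
		"provider":   "openai",
		"inputs": map[string]any{
			"chat_history": []map[string]string{
				{"role": "user", "content": "What is our refund policy?"},
			},
		},
		"outputs": map[string]any{
			"content": "Refunds are available within 30 days.",
		},
		"duration": latencyMs, // milliseconds spent in the LLM call
	})
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("model event rejected: %s", resp.Status)
	}
	return nil
}
```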

Async

In an asynchronous ingestion strategy, you wait until the user interaction has completed before logging the data to HoneyHive.

You can either send each session’s data right after it completes or collect a larger batch (100-1000) of sessions and flush them regularly.

If you already log the session data to a database somewhere, you can use the async batch strategy to import that data into HoneyHive.

The model events batch endpoint automatically separates each LLM call into its own session.

  1. POST /events/model/batch

The API reference for POST /events/model/batch accepts an array of model events and logs them in a single API call.
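For example, a batch of completed LLM calls could be sent in one request as sketched below (again reusing the hypothetical postJSON helper). The "model_events" wrapper key and the per-event fields are assumptions; confirm the exact body shape against the POST /events/model/batch reference.

```go
// Sketch: send several completed LLM calls in one request, reusing postJSON.
// The "model_events" key and per-event fields are assumptions, not the
// confirmed schema; see the POST /events/model/batch reference.
func logModelEventBatch(modelEvents []map[string]any) error {
	resp, err := postJSON("/events/model/batch", map[string]any{
		"model_events": modelEvents,
	})
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("batch rejected: %s", resp.Status)
	}
	return nil
}
```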

If you’d like to explicitly group LLM calls into their own sessions, follow the async batching instructions for tool call ingestion below.

External Tool Data Ingestion

If you’d like to track external tool calls (vector DB queries, function calls, etc.) alongside the LLM invocations, the approach follows the same sync/async distinction as the LLM ingestion above.

We highly recommend reading our Data Model and Instrumentation Guide to understand the data you need to log and how to structure it.

Sync

The synchronous ingestion strategy for tool calls is similar to the LLM ingestion strategy.

  1. POST /session/start

You start a session tracking the relevant session properties you need.

Keep the session_id that’s returned to link future events to the same session.

  2. POST /events

After each LLM or tool invocation, you call our POST /events endpoint with the relevant event data (as described in our instrumentation guide).
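As a sketch of what a tool event might look like (reusing the hypothetical postJSON helper; the event_type, event_name, config, inputs, outputs, and duration fields shown here are assumptions to verify against the POST /events reference and the instrumentation guide):

```go
// Sketch: log a vector DB retrieval as a tool event linked to an existing
// session, reusing the postJSON helper. Field names are illustrative; see
// the POST /events reference and the instrumentation guide for the schema.
func logToolEvent(sessionID string, durationMs float64) error {
	resp, err := postJSON("/events", map[string]any{
		"session_id": sessionID,
		"project":    "my-project",
		"event_type": "tool",
		"event_name": "vector_db_retrieval",
		"config":     map[string]any{"top_k": 5},
		"inputs":     map[string]any{"query": "refund policy"},
		"outputs":    map[string]any{"chunks": []string{"Refunds are available within 30 days."}},
		"duration":   durationMs, // milliseconds spent in the tool call
	})
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("event rejected: %s", resp.Status)
	}
	return nil
}
```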

This strategy is recommended for low-traffic applications.

Async

The asynchronous ingestion strategy for tool calls is again similar to the asynchronous LLM ingestion strategy, with the key difference that you use our generic POST /events/batch endpoint instead of POST /events/model/batch.

You can either send each session’s data right after it completes or collect a larger batch (100-1000) of sessions and flush them regularly.

  1. POST /events/batch

The API reference for POST /events/batch accepts an array of events and logs them in a single API call.
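One way to implement the collect-and-flush pattern is a small in-memory buffer that is drained when it reaches a size threshold and on a timer. The sketch below reuses the hypothetical postJSON helper, assumes an "events" array in the request body, and picks arbitrary thresholds; none of these details come from the API reference.

```go
// Sketch: buffer events in memory and flush them to POST /events/batch when
// the buffer grows large or on a timer. The "events" key, the threshold, and
// the flush interval are assumptions, not part of the API contract.
// Requires the sync, time, and log imports plus the postJSON helper above.
type EventBuffer struct {
	mu     sync.Mutex
	events []map[string]any
}

// Add appends an event (each carrying its own session_id) and flushes when
// the buffer reaches a threshold in the suggested 100–1000 range.
func (b *EventBuffer) Add(event map[string]any) {
	b.mu.Lock()
	b.events = append(b.events, event)
	full := len(b.events) >= 500
	b.mu.Unlock()
	if full {
		b.Flush()
	}
}

// Flush drains the buffer and sends everything in a single batch call.
func (b *EventBuffer) Flush() {
	b.mu.Lock()
	batch := b.events
	b.events = nil
	b.mu.Unlock()
	if len(batch) == 0 {
		return
	}
	resp, err := postJSON("/events/batch", map[string]any{"events": batch})
	if err != nil {
		log.Printf("flush failed: %v", err)
		return
	}
	resp.Body.Close()
}

// StartPeriodicFlush drains the buffer every 30 seconds regardless of size.
func (b *EventBuffer) StartPeriodicFlush() {
	go func() {
		for range time.Tick(30 * time.Second) {
			b.Flush()
		}
	}()
}
```

If you use a pattern like this, remember to call Flush once more on application shutdown so buffered events are not lost.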

The endpoint accepts a boolean property is_single_session.

  1. If set to true, the events in the batch will be grouped into a single session.
  2. If set to false, HoneyHive only refers to the session_id on the event to decide which session the event belongs to.

The default value is false, so each event becomes its own session (or is grouped into the session indicated by its session_id).
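For example, a completed user interaction could be replayed as a single session by sending all of its events with is_single_session set to true. This sketch again reuses the hypothetical postJSON helper, and the "events" key is an assumption to verify against the POST /events/batch reference.

```go
// Sketch: group a completed interaction's events into one session by setting
// is_single_session to true, reusing the postJSON helper. Event fields are
// illustrative; see the POST /events/batch reference.
func logCompletedSession(events []map[string]any) error {
	resp, err := postJSON("/events/batch", map[string]any{
		"events":            events,
		"is_single_session": true,
	})
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("batch rejected: %s", resp.Status)
	}
	return nil
}
```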

If you want to group events into chain events, refer to the chain events section on our Data Model page.

Conclusion

We have seen customers use the above strategies to log their data to HoneyHive.

We have logger files that implement the above strategies in Go and Java.

If you have any questions or need help, please reach out to us. We are happy to help you get started with logging your data to HoneyHive.