LLM applications often involve both backend and frontend services to deliver the best user experience.

With distributed tracing, we can stitch together traces across multiple services to see exactly what happens when a user interacts with our application.

What your Cloud Architecture might look like

How it works

When a trace is captured in a service, it is associated with a unique Session Id. HoneyHive uses this Session Id to correlate traces across multiple services.

To correlate them, the same Session Id must be passed from service to service so that every trace in the request path shares it.

Implementing distributed tracing

For this tutorial, we assume you have already instrumented one of your services with our tracer and now want to correlate its traces with traces from another service.

Prerequisites

You have already set up tracing for your code as described here.

For serverless environments such as AWS Lambda, please ensure that your function uses the x86_64 runtime architecture and that you install the x86_64 build of the honeyhive package when packaging your deployment.

Expected Time: 5 minutes

Steps

1. Get the session id from the running tracer

All our tracers expose a session_id/sessionId property that you can use to get the session id of the trace.

For TypeScript, you will need to pass the tracer object to the traced function to fetch the session id.
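In Python, for example, reading the session id might look like the following. This is a minimal sketch: the init arguments and environment variable names are placeholders for the configuration you already use from the tracing setup guide; the session_id property is the part this step relies on.

```python
import os

from honeyhive import HoneyHiveTracer

# Initialize the tracer as you did when setting up tracing.
# The api_key/project arguments and environment variable names are placeholders.
tracer = HoneyHiveTracer.init(
    api_key=os.environ["HH_API_KEY"],
    project=os.environ["HH_PROJECT"],
)

# The tracer exposes the session id of the trace it is recording.
session_id = tracer.session_id
```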
2. Send the session id to the other service

Pass the session id to the other service, for example as a response header or a property in the response body.
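For example, a Python service using Flask might return the session id in a response header. Flask, the route, and the X-HoneyHive-Session-Id header name are illustrative choices here, not requirements; use whatever transport your services already share.

```python
import os

from flask import Flask, jsonify
from honeyhive import HoneyHiveTracer

app = Flask(__name__)

# Tracer initialized as in the previous step (arguments are placeholders).
tracer = HoneyHiveTracer.init(
    api_key=os.environ["HH_API_KEY"],
    project=os.environ["HH_PROJECT"],
)

@app.route("/chat", methods=["POST"])
def chat():
    # ... run your traced LLM logic here ...
    result = {"answer": "..."}

    # Return the session id so the calling service can continue the same trace.
    # The header name is an illustrative choice, not a required convention.
    response = jsonify(result)
    response.headers["X-HoneyHive-Session-Id"] = tracer.session_id
    return response
```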

3. Instantiate the tracer with the session id

In the other service, instantiate the tracer with the session id you received from the original service.

For serverless environments such as AWS Lambda, you must set disable_batch to True in the init and init_from_session_id functions. Also, ensure that your Lambda function uses the x86_64 runtime architecture.
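Here is a minimal Python sketch of the receiving side, assuming the upstream service returned the session id in a response header as in the previous step. The URL, header name, and all init_from_session_id parameters other than session_id and disable_batch are illustrative assumptions, not a definitive signature.

```python
import os

import requests
from honeyhive import HoneyHiveTracer

# Call the upstream service that started the trace (URL is a placeholder).
upstream = requests.post(
    "https://backend.internal.example.com/chat",
    json={"message": "Hello"},
)

# Read the session id the upstream service returned (header name is illustrative).
incoming_session_id = upstream.headers["X-HoneyHive-Session-Id"]

# Continue the existing session rather than starting a new one.
HoneyHiveTracer.init_from_session_id(
    api_key=os.environ["HH_API_KEY"],
    session_id=incoming_session_id,
    # In serverless environments such as AWS Lambda, set disable_batch to True
    # so spans are flushed before the execution environment is frozen.
    disable_batch=True,
)

# ... traced work in this service now appears under the same session in HoneyHive ...
```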

Conclusion

You have successfully correlated traces across multiple services. You can now see the full trace in HoneyHive.

Learn more