Introduction

HoneyHive’s tracing functionality supports tracking model configurations, prompt templates, and other LLM settings in your traces.

Prerequisites

You have already set up tracing for your code as described in our quickstart guide.

Setting configurations

You can set configurations at either the trace level or the span level. If the configuration applies to the entire trace, set it at the trace level; if it applies only to a specific span, set it at the span level. For more details, refer to the enrich traces documentation.
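As a minimal sketch, assuming the `enrich_session` and `enrich_span` helpers and their `config` keyword from the HoneyHive Python SDK (exact names and signatures may vary by SDK version — check the SDK reference), setting a config at each level might look like:

```python
from honeyhive import enrich_session, enrich_span

# Trace level: this config applies to the entire trace (session).
# The keys and values below are illustrative, not required names.
enrich_session(config={
    "model": "gpt-4o",
    "temperature": 0.2,
    "prompt_template": "Summarize the following text:\n{text}",
})

# Span level: this config applies only to the current span,
# e.g. a retrieval step with its own parameters.
enrich_span(config={"retriever_top_k": 5})
```

Trace-level configs suit settings that are fixed for a whole request; span-level configs attach step-specific parameters to the step that used them. Note that both calls assume an active tracer, as set up in the quickstart.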

HoneyHive automatically captures configurations from most model providers. Use this function only when you want to capture additional configs that are not captured automatically. You can find the full list of supported packages here.
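For example — a hedged sketch in which `enrich_span` is the SDK helper named in this guide, while the client class and its parameters are hypothetical — you might manually record the settings of a provider that is not auto-instrumented:

```python
from honeyhive import enrich_span

class MyCustomClient:
    """Hypothetical in-house LLM client that HoneyHive
    does not auto-instrument."""
    def generate(self, model, max_tokens, top_p):
        ...  # call your model here

params = {"model": "my-local-llama", "max_tokens": 512, "top_p": 0.9}
MyCustomClient().generate(**params)

# Record the settings HoneyHive could not capture automatically:
enrich_span(config=params)
```

For providers on the supported-packages list, skip this step entirely — the config is already captured for you.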

Concepts

Analyzing configurations in traces

By including configurations in your traces, you can:

  • Track how different prompt structures affect your model’s output.
  • Analyze the impact of specific placeholder values on performance.
  • Compare prompts across different runs or sessions.
  • Identify patterns in successful or unsuccessful prompts.
  • Get more insights into how different models perform under the same conditions.

Together, these give you deeper insight into how your prompts are constructed and how they perform, enabling more effective prompt engineering and optimization.

Learn More

SDK Reference

Read more about the enrich_span function in the Python SDK reference.