This guide shows you how to attach configuration details (prompt version, model parameters, hyperparameters) to your traces using the config namespace.
Config vs Prompt Management: The config namespace logs which configuration was used for a given trace. To create and deploy prompt templates, see Managing Prompts.

Quick Start

Use enrich_session() to set the config for the entire trace, or enrich_span() to set it on a specific operation.

On the Session

Set configuration context that applies to the entire user interaction:
from honeyhive import HoneyHiveTracer
import os

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)

# ... your application logic ...

tracer.enrich_session(config={
    "model": "gpt-4o-mini",
    "prompt_version": "v2.3",
    "temperature": 0.7,
    "max_tokens": 1024,
})

On a Span

Attach config to a specific function or LLM call:
from honeyhive import HoneyHiveTracer, trace
import os

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)

@trace
def generate_summary(text: str):
    # ... your function logic ...
    response = ...  # placeholder for your function's output

    tracer.enrich_span(config={
        "prompt_name": "summarizer-v3",
        "model": "gpt-4o",
        "temperature": 0.3,
        "system_prompt": "You are a concise summarizer.",
    })

    return response

Logging a Deployed Prompt

If you use HoneyHive’s Prompt Management to deploy prompts, you can log the fetched configuration on the trace so you know exactly which version produced each response.
import os
from openai import OpenAI
from honeyhive import HoneyHive, HoneyHiveTracer, enrich_span, trace

hh = HoneyHive(api_key=os.environ["HH_API_KEY"])
tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)
openai_client = OpenAI()

# Fetch the deployed prompt
configs = hh.configurations.list()
prompt = next(
    (
        cfg for cfg in (c.model_dump() for c in configs)
        if cfg.get("name") == "my-prompt" and "prod" in cfg.get("env", [])
    ),
    None,
)

@trace
def chat(user_message: str):
    if not prompt:
        raise ValueError("Prompt not found")

    params = prompt["parameters"]

    # Log which config produced this response
    enrich_span(config={
        "prompt_name": prompt["name"],
        "prompt_id": prompt.get("id"),
        "model": params["model"],
        "temperature": params.get("hyperparameters", {}).get("temperature"),
    })

    messages = params["template"] + [{"role": "user", "content": user_message}]
    response = openai_client.chat.completions.create(
        model=params["model"],
        messages=messages,
        **params.get("hyperparameters", {}),
    )
    return response.choices[0].message.content

chat("Explain quantum computing simply.")
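The example above raises when no deployed prompt matches. If that is too strict for your app, one pattern (an illustration, not part of the SDK — `resolve_params` and `DEFAULT_PARAMS` are hypothetical names) is to fall back to pinned local defaults and record which path was taken in the config, so traces show whether a response came from a deployed prompt or the fallback:

```python
# Hypothetical fallback: use pinned local defaults when no deployed
# prompt matches, and record the source so it appears on the trace.
DEFAULT_PARAMS = {"model": "gpt-4o-mini", "hyperparameters": {"temperature": 0.5}}

def resolve_params(prompt):
    """Return (params, source): deployed params if available, else defaults."""
    if prompt:
        return prompt["parameters"], "deployed"
    return DEFAULT_PARAMS, "local-default"

params, source = resolve_params(None)  # simulate a missing deployed prompt
# Inside a traced function you would then log the source alongside the model:
#   enrich_span(config={"model": params["model"], "config_source": source})
```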

Concepts

What Belongs in Config?

The config namespace is for any setting that controls how your application generates a response. This makes it easy to filter and compare traces by configuration in the dashboard.
| Category | Example keys |
| --- | --- |
| Model selection | model, provider, fallback_model |
| Prompt versioning | prompt_name, prompt_version, prompt_id |
| Hyperparameters | temperature, max_tokens, top_p, frequency_penalty |
| System behavior | system_prompt, tool_choice, response_format |
| Routing | ab_variant, rollout_percentage, feature_flag |
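A single span's config can draw from several of these categories at once. A minimal sketch — the key names below are illustrative examples, not a required schema, since the config namespace accepts arbitrary keys:

```python
# Illustrative config combining categories from the table above.
config = {
    # Model selection
    "model": "gpt-4o",
    "fallback_model": "gpt-4o-mini",
    # Prompt versioning
    "prompt_version": "v2.3",
    # Hyperparameters
    "temperature": 0.7,
    "max_tokens": 1024,
    # Routing
    "ab_variant": "B",
}
# Inside a traced function you would pass it as:
#   tracer.enrich_span(config=config)
```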

Config vs Metadata

Both store key-value data. Use the right namespace so you can filter effectively in the dashboard.
| Namespace | Use for | Example |
| --- | --- | --- |
| config | Settings that control generation | model, temperature, prompt_version |
| metadata | Context about the request | request_id, endpoint, environment |
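For example, one request might record its generation settings under config and its request context under metadata. A minimal sketch — the particular split shown is an assumption about a hypothetical app, not a rule:

```python
# Generation settings belong in config; request context belongs in metadata.
generation_config = {"model": "gpt-4o", "temperature": 0.2, "prompt_version": "v4"}
request_metadata = {"request_id": "req-123", "endpoint": "/chat", "environment": "staging"}

# Inside a traced function you would attach both namespaces at once:
#   tracer.enrich_span(config=generation_config, metadata=request_metadata)
```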

Data Types

| Type | Example |
| --- | --- |
| String | "model": "gpt-4o" |
| Number | "temperature": 0.7 |
| Boolean | "stream": true |
| Object | "hyperparameters": {"top_p": 0.9} |
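In Python, all four types from the table fit in one plain dict (note that the JSON literal true becomes True). A sketch with illustrative values:

```python
# One config dict exercising all four supported value types.
config = {
    "model": "gpt-4o",                  # string
    "temperature": 0.7,                 # number
    "stream": True,                     # boolean (JSON true)
    "hyperparameters": {"top_p": 0.9},  # nested object
}
```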

Learn More

SDK Reference