If you’re already instrumented with OpenTelemetry, OpenInference, or Traceloop and want to know where your attributes land in HoneyHive’s schema, this is the page to consult. Use it to look up which bucket.key to query and to understand how HoneyHive normalizes across ecosystems.
OTel GenAI columns are pinned to semantic conventions v1.36 (stable release). Attributes introduced in v1.37 are noted inline.

How to read this matrix

Each section covers one concept group. Columns are:
| Column | Instrumentation source |
| --- | --- |
| OTel GenAI | Official OpenTelemetry GenAI semantic conventions (v1.36+) |
| OpenInference | Arize AI OpenInference instrumentation (`openinference-*` packages) |
| Traceloop | Traceloop OpenLLMetry (`opentelemetry-instrumentation-*` packages) |
| HoneyHive canonical | Destination key in the HoneyHive schema (`bucket.key` notation; see the Semantic Convention Reference for the full seven-bucket schema) |
A `-` in a source column means no standard attribute exists in that ecosystem for this concept. Where the HoneyHive canonical column also shows `-`, the attribute is a confirmed gap and must be set manually.

Span / Operation Kinds

Each ecosystem classifies spans by type (LLM call, tool call, agent step, chain, etc.) using a different attribute:
| Ecosystem | Span kind attribute | Common values |
| --- | --- | --- |
| OTel GenAI v1.37+ | `gen_ai.agent.type` | llm, tool, agent |
| OpenInference | `openinference.span.kind` | LLM, TOOL, AGENT, CHAIN, RETRIEVER, EMBEDDING |
| Traceloop | `traceloop.span.kind` | llm, tool, agent, task, workflow |
| HoneyHive canonical | `metadata.span_kind` | normalized from whichever is present |
Why this matters for attribute routing: tool attributes (`config.tool_name`, `config.tool_description`, `metadata.tool_call_id`) are surfaced in HoneyHive's Tool section only on spans classified as tool calls. If these attributes are missing in the UI, check that your instrumentation sets the span kind correctly.

Pure OTel v1.36 tool spans: OTel v1.36 has no `gen_ai.agent.type` attribute. If you're on pure OTel v1.36, `gen_ai.tool.*` attributes are still captured and mapped to their canonical destinations (`config.tool_name`, etc.); they just won't have a `metadata.span_kind` set, since there is no source attribute to derive it from. The `gen_ai.agent.type` attribute arriving in v1.37 will give OTel parity with OpenInference and Traceloop for span kind classification.
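The normalization described above can be sketched as a small lookup function. This is an illustrative sketch of the routing logic, not HoneyHive's actual normalizer; the attribute names come from the table above.

```python
# Illustrative sketch of span-kind normalization (not HoneyHive's actual
# implementation): check each ecosystem's span-kind attribute in turn and
# lowercase the value into a single metadata.span_kind.
SPAN_KIND_SOURCES = (
    "gen_ai.agent.type",        # OTel GenAI v1.37+
    "openinference.span.kind",  # OpenInference
    "traceloop.span.kind",      # Traceloop
)

def normalize_span_kind(attributes):
    """Return a normalized lowercase span kind, or None when no source
    attribute is present (e.g. a pure OTel v1.36 span)."""
    for key in SPAN_KIND_SOURCES:
        value = attributes.get(key)
        if value is not None:
            return str(value).lower()
    return None
```

Note that the `None` case matches the v1.36 behavior described above: the span's other attributes still route normally, but no `metadata.span_kind` is derived.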

Token Counts

| Attribute | OTel GenAI | OpenInference | Traceloop | HoneyHive canonical |
| --- | --- | --- | --- | --- |
| Input / prompt tokens | `gen_ai.usage.input_tokens` (v1.36+, preferred)<br>`gen_ai.usage.prompt_tokens` (legacy alias) | `llm.token_count.prompt`<br>`gen_ai.usage.prompt_tokens` | `llm.token_count.prompt`<br>`gen_ai.usage.prompt_tokens` | `metadata.input_tokens`<br>`metadata.prompt_tokens` |
| Output / completion tokens | `gen_ai.usage.output_tokens` (v1.36+, preferred)<br>`gen_ai.usage.completion_tokens` (legacy alias) | `llm.token_count.completion`<br>`gen_ai.usage.completion_tokens` | `llm.token_count.completion`<br>`gen_ai.usage.completion_tokens` | `metadata.output_tokens`<br>`metadata.completion_tokens` |
| Total tokens | - (derived by normalizer) | `llm.token_count.total`<br>`gen_ai.usage.total_tokens` | `llm.token_count.total`<br>`llm.usage.total_tokens` | `metadata.total_tokens` (auto-computed if absent) |
| Cache read tokens | `gen_ai.usage.cache_read_input_tokens` | `gen_ai.usage.cache_read_input_tokens` | `gen_ai.usage.cache_read_input_tokens` | `metadata.cache_read_input_tokens` |
| Cache write tokens | `gen_ai.usage.cache_write_input_tokens` | - | `gen_ai.usage.cache_write_input_tokens` | `metadata.cache_write_input_tokens` |
| Reasoning tokens | `gen_ai.usage.reasoning_tokens` | `gen_ai.usage.reasoning_tokens` | `gen_ai.usage.reasoning_tokens` | `metadata.reasoning_tokens` |
`metadata.total_tokens` is auto-computed by the normalizer from `prompt_tokens + completion_tokens` (or `input_tokens + output_tokens`) when not explicitly set. It is the source for the Total Tokens stat in Session Summary.

Traceloop uses `llm.usage.total_tokens`. The `llm.usage` prefix (instead of `gen_ai.usage`) is intentional in Traceloop's instrumentation, not a typo.
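The auto-compute behavior can be illustrated with a short sketch. This is assumed logic based on the description above, not HoneyHive's actual code: an explicit total wins, otherwise the prompt/input and completion/output counts are summed.

```python
def compute_total_tokens(metadata):
    """Sketch of metadata.total_tokens derivation (illustrative only)."""
    if metadata.get("total_tokens") is not None:
        return metadata["total_tokens"]  # explicitly set value wins
    # Accept either naming family: prompt/completion or input/output.
    prompt = metadata.get("prompt_tokens", metadata.get("input_tokens"))
    completion = metadata.get("completion_tokens", metadata.get("output_tokens"))
    if prompt is None or completion is None:
        return None  # cannot derive a total from partial counts
    return prompt + completion
```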

Model Identity

| Attribute | OTel GenAI | OpenInference | Traceloop | HoneyHive canonical |
| --- | --- | --- | --- | --- |
| Request model | `gen_ai.request.model` | `gen_ai.request.model` | `gen_ai.request.model`<br>`llm.model_name` | `config.model` |
| Response model | `gen_ai.response.model` | `gen_ai.response.model` | `gen_ai.response.model` | `metadata.response_model` |
| Resolved model name | - | `llm.model_name` | `llm.model_name` | `metadata.model_name`<br>`metadata.llm.model_name` (legacy dotted key) |
| Provider / system | `gen_ai.system`<br>`gen_ai.provider.name` (v1.37+) | `gen_ai.system`<br>`llm.provider` | `gen_ai.system`<br>`llm.provider` | `metadata.system`<br>`config.provider` |
| Response ID | `gen_ai.response.id` | `gen_ai.response.id` | `gen_ai.response.id` | `metadata.response_id` |
`metadata.model_name` is the resolved model name shown in the HoneyHive UI. The normalizer sets it from `config.model` (request model) as a fallback when no response model is available. Use `metadata.response_model` for the exact model returned by the API.

Provider / system mapping: `gen_ai.system` (all three ecosystems) maps to `metadata.system`. `gen_ai.provider.name` (OTel v1.37+) and `llm.provider` (OpenInference/Traceloop) map to `config.provider`.
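The fallback order for the resolved model name can be sketched as follows. This is illustrative, using the canonical key names from the table above rather than HoneyHive's actual implementation.

```python
def resolve_model_name(config, metadata):
    """Sketch of the metadata.model_name fallback (illustrative):
    prefer the response model reported by the API, otherwise fall
    back to the request model stored in config.model."""
    return metadata.get("response_model") or config.get("model")
```

For example, a request for a model alias typically resolves to a dated snapshot in the response; the response model is the more precise value when both exist.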

Finish Reason

| Attribute | OTel GenAI | OpenInference | Traceloop | HoneyHive canonical |
| --- | --- | --- | --- | --- |
| Finish reason (scalar) | First element of `gen_ai.response.finish_reasons` (extracted by normalizer) | `gen_ai.response.finish_reason` | First element of `gen_ai.response.finish_reasons` (extracted by normalizer) | `metadata.finish_reason` |
| Finish reasons (array) | `gen_ai.response.finish_reasons` | `gen_ai.response.finish_reasons` | `gen_ai.response.finish_reasons` | `metadata.finish_reasons`<br>`metadata.response_finish_reasons` (OpenInference alias) |
OTel GenAI defines `gen_ai.response.finish_reasons` as a string array. HoneyHive extracts the first element into `metadata.finish_reason` for single-choice filtering and also stores the full array in `metadata.finish_reasons`. OpenInference's singular `gen_ai.response.finish_reason` maps directly to `metadata.finish_reason`.
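The extraction rule can be sketched in a few lines (illustrative only, not HoneyHive's actual normalizer):

```python
def extract_finish_reason(attrs):
    """Sketch of scalar finish-reason normalization (illustrative)."""
    # OpenInference's singular attribute maps directly.
    if "gen_ai.response.finish_reason" in attrs:
        return attrs["gen_ai.response.finish_reason"]
    # OTel GenAI / Traceloop report a string array; take the first element.
    reasons = attrs.get("gen_ai.response.finish_reasons") or []
    return reasons[0] if reasons else None
```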

Agent Context

| Attribute | OTel GenAI | OpenInference | Traceloop | HoneyHive canonical |
| --- | --- | --- | --- | --- |
| Agent name | `gen_ai.agent.name` | `gen_ai.agent.name` | `gen_ai.agent.name` | `metadata.agent_name` |
| Agent description | `gen_ai.agent.description` | `gen_ai.agent.description` | `gen_ai.agent.description` | `metadata.agent_description` |
| Agent ID | `gen_ai.agent.id` | `gen_ai.agent.id` | `gen_ai.agent.id` | `metadata.agent_id` |
| Span / operation kind | - (`gen_ai.agent.type` in v1.37+) | `openinference.span.kind` (LLM, AGENT, TOOL, CHAIN, …) | `traceloop.span.kind` | `metadata.span_kind` |
| Agent handoff context | - (gap) | - (gap) | - (gap) | - (gap; set manually via metadata) |
Gap: agent handoff context. None of the three ecosystems has a stable standard attribute for passing context between agents in a handoff (e.g., handoff reason, receiving agent name, continuation pointer). If you need this in HoneyHive, set it manually as arbitrary metadata sub-keys. For consistency, a naming convention such as `metadata.handoff_reason` and `metadata.handoff_target_agent` is recommended.

Tool Linking

| Attribute | OTel GenAI | OpenInference | Traceloop | HoneyHive canonical |
| --- | --- | --- | --- | --- |
| Tool name | `gen_ai.tool.name` | `gen_ai.tool.name` | `gen_ai.tool.name` | `config.tool_name` |
| Tool description | `gen_ai.tool.description` | `gen_ai.tool.description` | `gen_ai.tool.description` | `config.tool_description` |
| Tool call ID | `gen_ai.tool.call.id` | `gen_ai.tool.call.id` | `gen_ai.tool.call.id` | `metadata.tool_call_id` |
| Tool status | `gen_ai.tool.status` | `gen_ai.tool.status` | `gen_ai.tool.status` | `metadata.tool_status` |
Tool call argument mapping (`gen_ai.tool.call.arguments` vs OpenInference `llm.tools.*`) is intentionally out of scope. The attribute structures are complex and framework-specific. See your framework's instrumentation docs for argument schema details: OTel GenAI tool span spec, OpenInference semantic conventions, Traceloop OpenLLMetry instrumentation.
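Because all three ecosystems share the same `gen_ai.tool.*` source keys, the in-scope routing reduces to a direct key map. This sketch is illustrative; the destination keys come from the table above.

```python
# Illustrative routing table for the in-scope tool attributes.
TOOL_ATTRIBUTE_MAP = {
    "gen_ai.tool.name": "config.tool_name",
    "gen_ai.tool.description": "config.tool_description",
    "gen_ai.tool.call.id": "metadata.tool_call_id",
    "gen_ai.tool.status": "metadata.tool_status",
}

def route_tool_attributes(attrs):
    """Map source tool attributes to their HoneyHive canonical keys,
    skipping any that are absent on the span."""
    return {
        dest: attrs[src]
        for src, dest in TOOL_ATTRIBUTE_MAP.items()
        if src in attrs
    }
```

Remember that these keys surface in the Tool section only on spans classified as tool calls (see Span / Operation Kinds above).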

Session / Conversation Context

| Attribute | OTel GenAI | OpenInference | Traceloop | HoneyHive canonical |
| --- | --- | --- | --- | --- |
| Conversation / Session ID | `gen_ai.conversation.id` | `session.id` | - (no standard) | `metadata.conversation_id`<br>root `session_id` (when set via `honeyhive.session_id`) |
| User ID | - (no standard) | `user.id` | - (no standard) | `metadata.user_id` |
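The conversation and user ID mappings above can be sketched the same way as the other normalizations (illustrative only; note that Traceloop contributes no standard source key for either):

```python
def resolve_conversation_id(attrs):
    """Sketch: gen_ai.conversation.id (OTel GenAI) and session.id
    (OpenInference) both land in metadata.conversation_id."""
    return attrs.get("gen_ai.conversation.id") or attrs.get("session.id")

def resolve_user_id(attrs):
    """Sketch: only OpenInference defines a standard user.id."""
    return attrs.get("user.id")
```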

Gaps Summary

The following concepts have no stable attribute in any of the three instrumentation ecosystems:
| Concept | Recommended workaround |
| --- | --- |
| Agent handoff context | Set manually: `metadata.handoff_reason`, `metadata.handoff_target_agent` |