Documentation Index

Fetch the complete documentation index at: https://docs.honeyhive.ai/llms.txt

Use this file to discover all available pages before exploring further.

Strands Agents is an open-source SDK from AWS for building AI agents. It provides a simple, code-first approach to creating agents with tools, multi-agent orchestration, and streaming support. HoneyHive integrates seamlessly with Strands—no instrumentor needed. Strands uses OpenTelemetry natively, so initializing HoneyHive first automatically captures all agent activity.

Quick Start

Zero configuration required. Initialize HoneyHive before importing Strands, and all agent runs, tool calls, and model calls are traced automatically.
To see where to initialize the tracer for your environment, including AWS Lambda and long-running servers, see Tracer Initialization.
pip install "honeyhive[aws-strands]"

# Or install separately
pip install honeyhive strands-agents
import os
from honeyhive import HoneyHiveTracer
from strands import Agent

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)

# Your Strands code works unchanged - traces appear automatically
agent = Agent(model=your_model)
result = agent("Hello!")

Tested Versions

HoneyHive’s Strands integration is tested against the following versions, as of April 2026. Newer patch releases are generally safe; pin to these versions to reproduce a known-good configuration.
Package          Version
strands-agents   1.37.0 (minimum: >= 1.19.0)
boto3            1.42.97 (minimum: >= 1.26.0; required for the AWS Bedrock model backend)
Requires Python 3.11+.
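The Python requirement can be checked at runtime with a minimal guard (an illustrative sketch, not part of the HoneyHive SDK):

```python
import sys

def meets_python_requirement(version: tuple[int, int] = sys.version_info[:2]) -> bool:
    """Return True when the interpreter meets the 3.11+ requirement.
    Illustrative guard, not part of the HoneyHive SDK."""
    return version >= (3, 11)
```

Running this early in application startup gives a clearer error than a later import failure would.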
Don’t use openinference-instrumentation-strands-agents. That instrumentor (from Arize/OpenInference) isn’t supported by HoneyHive’s ingestion pipeline today, and its spans may be dropped or misrouted. Strands’ native OTel path shown above is the recommended and fully supported approach.

What Gets Traced

HoneyHive automatically captures:
  • Agent invocations - Every agent() call with inputs and outputs
  • LLM calls - Model requests, responses, and token usage
  • Tool executions - Each tool call with arguments and results
  • Event loop cycles - Internal agent reasoning steps
No manual instrumentation required.

Example: Agent with Tools

import os
from honeyhive import HoneyHiveTracer
from strands import Agent, tool
from strands.models.anthropic import AnthropicModel

# Initialize HoneyHive first
tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)

@tool
def calculator(expression: str) -> str:
    """Evaluate a math expression."""
    # Demo only: eval() is unsafe on untrusted input
    return str(eval(expression))

agent = Agent(
    model=AnthropicModel(model_id="claude-sonnet-4-20250514", max_tokens=1024),
    tools=[calculator],
    system_prompt="You are a helpful assistant.",
)

result = agent("What is 25 * 4?")
print(result)

Example: AWS Bedrock Model Backend

Strands works with AWS Bedrock via BedrockModel. You’ll need AWS credentials configured via any standard mechanism — ~/.aws/credentials, an IAM role, or environment variables:
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_DEFAULT_REGION="us-west-2"
pip install "honeyhive[aws-strands]"
import os
from honeyhive import HoneyHiveTracer
from strands import Agent, tool
from strands.models import BedrockModel

# Initialize HoneyHive first
tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)

@tool
def calculator(operation: str, a: float, b: float) -> float:
    """Perform basic math operations."""
    if operation == "add":
        return a + b
    elif operation == "multiply":
        return a * b
    return 0

# Use a cross-region inference profile for on-demand throughput
model = BedrockModel(model_id="us.anthropic.claude-haiku-4-5-20251001-v1:0")
agent = Agent(model=model, tools=[calculator])

result = agent("What is 5 + 3?")
print(result)
Use a cross-region inference profile (the us. or global. prefix on the model ID) rather than a bare model ID. Bedrock requires the prefixed form for on-demand requests to Claude models — without it, Strands will fail with a model-access error. Example: us.anthropic.claude-haiku-4-5-20251001-v1:0.
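The prefix rule above can be sketched as a small helper (hypothetical, not part of Strands or HoneyHive; Bedrock itself defines which profile prefixes are valid):

```python
def ensure_inference_profile(model_id: str, region_prefix: str = "us") -> str:
    """Prepend a cross-region inference-profile prefix when the model ID
    is bare. Hypothetical helper for illustration only."""
    profile_prefixes = ("us.", "eu.", "apac.", "global.")
    if model_id.startswith(profile_prefixes):
        return model_id
    return f"{region_prefix}.{model_id}"
```

For example, a bare `anthropic.claude-haiku-4-5-20251001-v1:0` becomes `us.anthropic.claude-haiku-4-5-20251001-v1:0`, while an ID that already carries a profile prefix passes through unchanged.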

Example: Multi-Agent (Agents-as-Tools)

Strands supports wrapping agents as tools for orchestration patterns:
import os
from honeyhive import HoneyHiveTracer
from strands import Agent, tool
from strands.models.anthropic import AnthropicModel

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)

def get_model():
    return AnthropicModel(model_id="claude-sonnet-4-20250514", max_tokens=1024)

# Specialist agents wrapped as tools
@tool
def research_agent(query: str) -> str:
    """Research factual information."""
    agent = Agent(model=get_model(), system_prompt="You are a research specialist.")
    return str(agent(query))

@tool
def math_agent(problem: str) -> str:
    """Solve math problems step-by-step."""
    agent = Agent(model=get_model(), system_prompt="You are a math specialist.")
    return str(agent(problem))

# Orchestrator routes to specialists
orchestrator = Agent(
    model=get_model(),
    tools=[research_agent, math_agent],
    system_prompt="""Route queries to the appropriate specialist:
- Factual questions → research_agent
- Math problems → math_agent""",
)

result = orchestrator("What is the square root of 144?")
print(result)
In HoneyHive, you’ll see the full trace hierarchy: orchestrator → specialist agent → LLM calls.

Adding Metadata

Session-Level (User Context)

Use enrich_session for metadata that applies to the entire session:
tracer = HoneyHiveTracer.init(project="your-project")

tracer.enrich_session(
    user_properties={
        "user_id": "user_123",
        "plan": "enterprise",
    }
)

Span-Level (Agent Attributes)

Use Strands’ native trace_attributes for metadata on agent spans:
agent = Agent(
    model=model,
    tools=[...],
    trace_attributes={
        "environment": "production",
        "feature": "customer_support",
        "app_version": "2.1.0",
    },
)
These attributes appear on all spans created by that agent (invocations, LLM calls, tool executions).

Troubleshooting

Traces not appearing

  1. Initialize HoneyHive first - Must be called before importing Strands:
# ✅ Correct order
from honeyhive import HoneyHiveTracer
tracer = HoneyHiveTracer.init(project="your-project")

from strands import Agent  # Import after HoneyHive init

# ❌ Wrong - HoneyHive initialized after Strands
from strands import Agent
from honeyhive import HoneyHiveTracer
tracer = HoneyHiveTracer.init(project="your-project")
  2. Check environment variables - Ensure HH_API_KEY and HH_PROJECT are set
  3. Verify model credentials - Ensure your model provider credentials are configured (Anthropic API key, AWS credentials, etc.)
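The environment-variable check can be done programmatically with a short helper (an illustrative sketch, not part of the SDK; the variable names match those used throughout this page):

```python
import os

def missing_honeyhive_env() -> list[str]:
    """Return the names of required HoneyHive environment variables
    that are unset or empty. Illustrative helper, not part of the SDK."""
    required = ("HH_API_KEY", "HH_PROJECT")
    return [name for name in required if not os.getenv(name)]

if missing_honeyhive_env():
    print("Set these before initializing the tracer:", missing_honeyhive_env())
```

Running this before HoneyHiveTracer.init surfaces a missing key immediately instead of failing silently with no traces.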

Enrich Your Traces

Add user IDs and custom metadata to Strands traces

Custom Spans

Create spans for business logic around agent calls

Distributed Tracing

Trace agents across service boundaries

Query Trace Data

Export traces programmatically