**BYOI Advantage:** HoneyHive has no hard dependency on boto3 or other AWS SDKs. Use any boto3 version you need: trace Claude, Llama, Titan, and more without SDK conflicts.
## Compatibility

### Python Version Support

| Support Level | Python Versions |
|---|---|
| Fully Supported | 3.11, 3.12, 3.13 |
| Not Supported | 3.10 and below |
### AWS SDK Requirements

- Minimum: `boto3 >= 1.26.0`
- Recommended: `boto3 >= 1.28.0`
- Tested versions: 1.28.0, 1.29.0, 1.30.0
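If you want to fail fast when an environment has an older boto3, a small startup guard works. This is a sketch: `meets_minimum` is a hypothetical helper, and the threshold mirrors the minimum above.

```python
# Sketch: guard against a boto3 older than the minimum above.
# At startup you would compare boto3.__version__ against this.

MINIMUM = (1, 26, 0)

def version_tuple(version: str) -> tuple:
    """Turn a dotted version string like '1.28.0' into a comparable tuple."""
    return tuple(int(part) for part in version.split(".")[:3])

def meets_minimum(version: str) -> bool:
    """True if the given version satisfies boto3 >= 1.26.0."""
    return version_tuple(version) >= MINIMUM
```

In production code, prefer `packaging.version.Version` for the comparison, since it also handles pre-release and post-release suffixes that a plain tuple split does not.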
## Supported Models

| Provider | Models | Status |
|---|---|---|
| Anthropic | Claude 3.5, Claude 3 (Sonnet, Haiku, Opus) | Fully Supported |
| Amazon | Titan Text, Titan Embeddings | Fully Supported |
| Meta | Llama 2, Llama 3 | Fully Supported |
| Mistral | Mistral, Mixtral | Supported |
| Cohere | Command, Embed | Supported |
| AI21 | Jurassic | Supported |
## Known Limitations
- Cross-Region: Requires proper AWS credentials and region configuration
- Embedding Models: Traced but may require manual metadata enrichment
- Streaming: Supported with both instrumentors
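Streaming calls are traced by both instrumentors, but your application code still has to reassemble the streamed text. A minimal sketch, assuming Anthropic's messages streaming format on Bedrock (`content_block_delta` events carrying `{"delta": {"text": ...}}`); other model families use different chunk payloads:

```python
# Sketch: join the text deltas from a decoded Bedrock streaming response.
# Chunk shapes follow Anthropic's messages streaming events; this is an
# assumption about the payload, not a universal Bedrock format.

def collect_stream_text(chunks) -> str:
    """Concatenate text deltas from decoded streaming chunk dicts."""
    parts = []
    for event in chunks:
        if event.get("type") == "content_block_delta":
            parts.append(event["delta"].get("text", ""))
    return "".join(parts)

# With a live call you would decode each raw chunk first, e.g.:
# response = bedrock.invoke_model_with_response_stream(modelId=..., body=...)
# events = (json.loads(c["chunk"]["bytes"]) for c in response["body"])
# text = collect_stream_text(events)
```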
## Choose Your Instrumentor

| Instrumentor | Status | Best For | Install |
|---|---|---|---|
| OpenInference | Fully Supported | All Bedrock models | `pip install "honeyhive[openinference-bedrock]>=1.0.0rc0"` |
| Traceloop | Partial Support | Basic tracing | `pip install "honeyhive[traceloop-bedrock]>=1.0.0rc0"` |
## Quick Start with OpenInference

### Installation

```bash
# Recommended: install with the Bedrock integration extra
pip install "honeyhive[openinference-bedrock]>=1.0.0rc0"

# Alternative: manual installation (quote boto3>=... so the shell
# does not treat ">" as a redirection)
pip install "honeyhive>=1.0.0rc0" openinference-instrumentation-bedrock "boto3>=1.26.0"
```
### Basic Setup

```python
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.bedrock import BedrockInstrumentor
import boto3
import json

# Step 1: Initialize HoneyHive tracer first
tracer = HoneyHiveTracer.init(
    project="your-project"  # Or set HH_PROJECT environment variable
)  # Uses HH_API_KEY from environment

# Step 2: Initialize instrumentor with tracer_provider
instrumentor = BedrockInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

# Create Bedrock client
bedrock = boto3.client(
    "bedrock-runtime",
    region_name="us-east-1"
)

# Now all Bedrock calls are automatically traced!
response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1000,
        "messages": [{"role": "user", "content": "Hello from Bedrock!"}]
    })
)
# ✨ Automatically traced!
```

**Order matters!** The tracer must be initialized BEFORE calling `instrumentor.instrument()`.
## Quick Start with Traceloop

### Installation

```bash
# Recommended: install with the Traceloop Bedrock integration extra
pip install "honeyhive[traceloop-bedrock]>=1.0.0rc0"

# Alternative: manual installation (quote boto3>=... so the shell
# does not treat ">" as a redirection)
pip install "honeyhive>=1.0.0rc0" opentelemetry-instrumentation-bedrock "boto3>=1.26.0"
```
### Basic Setup

```python
from honeyhive import HoneyHiveTracer
from opentelemetry.instrumentation.bedrock import BedrockInstrumentor
import boto3
import json

# Step 1: Initialize HoneyHive tracer first
tracer = HoneyHiveTracer.init(project="your-project")

# Step 2: Initialize Traceloop instrumentor
instrumentor = BedrockInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

# Create Bedrock client and use normally
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1000,
        "messages": [{"role": "user", "content": "Hello!"}]
    })
)
```

Traceloop support for Bedrock is partial. Some Bedrock-specific features may require the OpenInference instrumentor.
## Instrumentor Comparison

| Feature | OpenInference | Traceloop |
|---|---|---|
| Status | Fully Supported | Partial |
| Claude Models | ✅ Full | ✅ Basic |
| Titan Models | ✅ Full | ✅ Basic |
| Llama Models | ✅ Full | ✅ Basic |
| Token Tracking | Detailed | Basic |
| Performance | Lightweight | Smart batching |
## Example: Claude on Bedrock

```python
from honeyhive import HoneyHiveTracer, trace, enrich_span
from openinference.instrumentation.bedrock import BedrockInstrumentor
import boto3
import json

# Initialize tracer and instrumentor
tracer = HoneyHiveTracer.init(
    api_key="your-honeyhive-key",
    project="bedrock-claude-demo",
    source="production"
)
instrumentor = BedrockInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

@trace
def chat_with_claude(message: str) -> str:
    """Chat with Claude 3 via Bedrock."""
    enrich_span({
        "provider": "bedrock",
        "model_family": "claude"
    })
    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 1000,
            "messages": [{"role": "user", "content": message}]
        })
    )
    result = json.loads(response["body"].read())
    return result["content"][0]["text"]

# All calls traced automatically
response = chat_with_claude("What is AWS Bedrock?")
```
## Example: Amazon Titan

```python
from honeyhive import HoneyHiveTracer, trace
from openinference.instrumentation.bedrock import BedrockInstrumentor
import boto3
import json

tracer = HoneyHiveTracer.init(project="bedrock-titan-demo")
instrumentor = BedrockInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

@trace
def generate_with_titan(prompt: str) -> str:
    """Generate text using Amazon Titan."""
    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = bedrock.invoke_model(
        modelId="amazon.titan-text-express-v1",
        body=json.dumps({
            "inputText": prompt,
            "textGenerationConfig": {
                "maxTokenCount": 1000,
                "temperature": 0.7,
                "topP": 0.9
            }
        })
    )
    result = json.loads(response["body"].read())
    return result["results"][0]["outputText"]

result = generate_with_titan("Explain cloud computing in simple terms.")
```
## Example: Multi-Turn Conversation with Converse API

```python
from honeyhive import HoneyHiveTracer, trace, enrich_span
from openinference.instrumentation.bedrock import BedrockInstrumentor
import boto3

tracer = HoneyHiveTracer.init(project="bedrock-converse-demo")
instrumentor = BedrockInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

@trace
def multi_turn_conversation():
    """Multi-turn conversation using the Bedrock Converse API."""
    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
    conversation = []
    model_id = "anthropic.claude-3-sonnet-20240229-v1:0"

    # First turn
    conversation.append({
        "role": "user",
        "content": [{"text": "What are three benefits of cloud computing?"}]
    })
    response = bedrock.converse(
        modelId=model_id,
        messages=conversation
    )
    assistant_message = response["output"]["message"]
    conversation.append(assistant_message)
    enrich_span({"turns": 1, "model": "claude-3-sonnet"})

    # Second turn
    conversation.append({
        "role": "user",
        "content": [{"text": "Can you elaborate on scalability?"}]
    })
    response = bedrock.converse(
        modelId=model_id,
        messages=conversation
    )
    enrich_span({"turns": 2, "status": "complete"})
    return response["output"]["message"]["content"][0]["text"]

result = multi_turn_conversation()
```
## Example: Multi-Model Comparison

```python
from honeyhive import HoneyHiveTracer, trace, enrich_span
from openinference.instrumentation.bedrock import BedrockInstrumentor
import boto3
import json

tracer = HoneyHiveTracer.init(project="bedrock-comparison")
instrumentor = BedrockInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

@trace
def compare_models(prompt: str) -> dict:
    """Compare responses from multiple Bedrock models."""
    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
    models = {
        "claude-sonnet": "anthropic.claude-3-sonnet-20240229-v1:0",
        "claude-haiku": "anthropic.claude-3-haiku-20240307-v1:0",
        "titan": "amazon.titan-text-express-v1"
    }
    results = {}
    for name, model_id in models.items():
        try:
            if "anthropic" in model_id:
                body = {
                    "anthropic_version": "bedrock-2023-05-31",
                    "max_tokens": 500,
                    "messages": [{"role": "user", "content": prompt}]
                }
            else:
                body = {
                    "inputText": prompt,
                    "textGenerationConfig": {"maxTokenCount": 500}
                }
            response = bedrock.invoke_model(
                modelId=model_id,
                body=json.dumps(body)
            )
            result = json.loads(response["body"].read())
            if "anthropic" in model_id:
                results[name] = result["content"][0]["text"]
            else:
                results[name] = result["results"][0]["outputText"]
        except Exception as e:
            results[name] = f"Error: {str(e)}"
    enrich_span({
        "models_compared": list(models.keys()),
        "successful": len([r for r in results.values() if not r.startswith("Error")])
    })
    return results

comparison = compare_models("Explain serverless computing in one paragraph.")
```
## Environment Configuration

```bash
# HoneyHive configuration
export HH_API_KEY="your-honeyhive-api-key"
export HH_PROJECT="your-project"
export HH_SOURCE="production"

# AWS configuration
export AWS_ACCESS_KEY_ID="your-aws-access-key"
export AWS_SECRET_ACCESS_KEY="your-aws-secret-key"
export AWS_DEFAULT_REGION="us-east-1"

# Or use AWS profiles
export AWS_PROFILE="your-profile"
```
## What Gets Traced

With instrumentors initialized, these Bedrock calls are automatically traced:

- `bedrock.invoke_model()` - Model invocation
- `bedrock.converse()` - Converse API
- `bedrock.invoke_model_with_response_stream()` - Streaming
- `bedrock.list_foundation_models()` - Model listing
Captured data includes:
- Model ID and parameters
- Input prompts
- Output responses
- Token usage (where available)
- Latency metrics
- AWS request metadata
- Errors and exceptions
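You can also surface token usage yourself, for example by attaching the `usage` block from a Converse API response to a span via `enrich_span`. A sketch, assuming the documented Converse response shape (`usage.inputTokens` / `outputTokens` / `totalTokens`); `invoke_model` responses report usage differently per model family:

```python
# Sketch: pull token counts out of a bedrock.converse() response so they
# can be attached to a span. Key names follow the Converse API response
# shape; missing keys fall back to 0.

def usage_metadata(response: dict) -> dict:
    """Extract token usage from a Converse API response dict."""
    usage = response.get("usage", {})
    return {
        "input_tokens": usage.get("inputTokens", 0),
        "output_tokens": usage.get("outputTokens", 0),
        "total_tokens": usage.get("totalTokens", 0),
    }

# Usage alongside the Converse example above:
# response = bedrock.converse(modelId=model_id, messages=conversation)
# enrich_span(usage_metadata(response))
```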
## Troubleshooting

### Missing Traces

Ensure correct initialization order:

```python
# ✅ Correct
tracer = HoneyHiveTracer.init(project="my-project")
instrumentor = BedrockInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

# ❌ Wrong - instrumentor before tracer
instrumentor = BedrockInstrumentor()
instrumentor.instrument()  # No tracer_provider!
tracer = HoneyHiveTracer.init(project="my-project")
```
### AWS Credentials

```python
# Option 1: Environment variables
import os
os.environ["AWS_ACCESS_KEY_ID"] = "your-key"
os.environ["AWS_SECRET_ACCESS_KEY"] = "your-secret"

# Option 2: AWS profile
bedrock = boto3.client(
    "bedrock-runtime",
    region_name="us-east-1"
)  # Uses default profile

# Option 3: Explicit credentials
bedrock = boto3.client(
    "bedrock-runtime",
    region_name="us-east-1",
    aws_access_key_id="your-key",
    aws_secret_access_key="your-secret"
)
```
### Region Configuration

```python
# Bedrock is available in specific regions
# Common regions: us-east-1, us-west-2, eu-west-1
bedrock = boto3.client(
    "bedrock-runtime",
    region_name="us-east-1"  # Ensure the region has Bedrock
)
```
### Import Errors

```bash
# For OpenInference
pip install "honeyhive[openinference-bedrock]>=1.0.0rc0"

# For Traceloop
pip install "honeyhive[traceloop-bedrock]>=1.0.0rc0"
```
## Migration Between Instrumentors

From OpenInference to Traceloop:

```python
# Before (OpenInference)
from openinference.instrumentation.bedrock import BedrockInstrumentor

# After (Traceloop) - just change the import
from opentelemetry.instrumentation.bedrock import BedrockInstrumentor

# Rest of the code stays the same!
```