Comprehensive Guide to Tracing AWS Bedrock with HoneyHive

AWS Bedrock gives you access to powerful foundation models (FMs) from Amazon and leading AI companies. This guide demonstrates how to implement tracing with HoneyHive to monitor and evaluate your AWS Bedrock applications.

Introduction to Tracing Types

HoneyHive provides four primary types of traces that work together to give you comprehensive visibility into your AWS Bedrock applications:

1. Model Invocation Traces

Model invocation traces capture each interaction with an AWS Bedrock model, recording:

  • Input prompts and parameters
  • Output responses
  • Latency and token usage metrics
  • Error information (if any)
  • Model-specific parameters

In our cookbook examples, model invocation traces are automatically captured when you make AWS Bedrock API calls like invoke_model and converse.

2. Function/Span Traces

Function traces (or spans) track the execution of specific functions in your code:

  • Function inputs and outputs
  • Execution duration
  • Parent-child relationships between functions
  • Custom metrics you define

The @trace decorator is used to create function traces, as shown in all examples in our cookbook.
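
Because traced functions can call other traced functions, the parent-child relationship is captured automatically. Here is a minimal sketch (the import path matches the honeyhive package used later in this guide; the function names are purely illustrative):

from honeyhive import trace

@trace
def retrieve_context(query):
    # Child span: nested under whichever traced function calls it
    return f"Background material for: {query}"

@trace
def answer_question(query):
    # Parent span: retrieve_context appears as a child of this span
    context = retrieve_context(query)
    return f"Answer based on: {context}"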

3. Session Traces

Session traces represent an entire user interaction or workflow:

  • Group all related model invocations and function traces
  • Maintain contextual information across multiple operations
  • Provide a complete picture of a user journey or request

Sessions are created when you initialize the HoneyHive tracer at the beginning of your application.

4. Custom Event Traces

Custom event traces let you track specific events or add metrics to any trace, as sketched after this list:

  • Business-specific metrics
  • User feedback events
  • Custom application states
  • Performance metrics
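
The cookbook scripts focus on model and function traces, so the snippet below is only a sketch of how custom events and metrics can be attached. It assumes the honeyhive SDK exposes enrich_span and enrich_session helpers; check the version you have installed for the exact names and signatures.

from honeyhive import trace, enrich_span, enrich_session  # helper names assumed

@trace
def summarize_ticket(ticket_text):
    summary = ticket_text[:100]  # stand-in for a real Bedrock call
    # Attach a business-specific metric to the current span
    enrich_span(metrics={"summary_length": len(summary)})
    return summary

# Attach user feedback and custom application state at the session level
enrich_session(
    feedback={"rating": 5, "comment": "Accurate summary"},
    metadata={"customer_tier": "enterprise"},
)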

Quickstart Guide

Installation

First, install the required dependencies:

pip install -r requirements.txt

The requirements.txt file includes:

boto3>=1.28.0
honeyhive>=0.1.0
python-dotenv>=1.0.0

Configuration

Create a .env file based on the .env.example template:

# AWS Credentials
AWS_ACCESS_KEY_ID=your_aws_access_key
AWS_SECRET_ACCESS_KEY=your_aws_secret_key
AWS_REGION=us-east-1

# HoneyHive Configuration
HONEYHIVE_API_KEY=your_honeyhive_api_key
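
With the .env file in place, a typical script loads it with python-dotenv (already listed in requirements.txt) before creating any clients. A minimal sketch:

import os

import boto3
from dotenv import load_dotenv

# Pull AWS and HoneyHive settings from .env into the environment
load_dotenv()

# boto3 reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment
bedrock_runtime = boto3.client(
    "bedrock-runtime",
    region_name=os.getenv("AWS_REGION", "us-east-1"),
)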

Basic Usage Pattern

The basic pattern for tracing AWS Bedrock with HoneyHive follows these steps (a full sketch follows the list):

  1. Initialize the HoneyHive tracer
  2. Decorate functions with @trace
  3. Make AWS Bedrock API calls
  4. Optionally add custom metrics
  5. Traces are automatically sent to HoneyHive
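
Put together, the pattern looks roughly like the sketch below. The model ID and prompt are illustrative, and the import path assumes the honeyhive package from requirements.txt; the Converse API call itself is covered in detail later in this guide.

import os

import boto3
from honeyhive import HoneyHiveTracer, trace

# Step 1: initialize the tracer (this also starts a session)
HoneyHiveTracer.init(
    api_key=os.getenv("HONEYHIVE_API_KEY"),
    project="aws-bedrock-examples",
    source="dev",
    session_name="quickstart",
)

# Step 2: decorate the function you want traced
@trace
def ask_model(prompt, model_id="amazon.titan-text-express-v1"):
    bedrock_runtime = boto3.client(
        "bedrock-runtime",
        region_name=os.getenv("AWS_REGION", "us-east-1"),
    )
    # Step 3: make the Bedrock call; the model invocation is captured automatically
    response = bedrock_runtime.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 256, "temperature": 0.5},
    )
    return response["output"]["message"]["content"][0]["text"]

# Steps 4-5: optionally enrich with custom metrics; traces are exported to HoneyHive automatically
print(ask_model("What is Amazon Bedrock?"))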

Detailed Examples

Listing Bedrock Models with Tracing

The bedrock_list_models.py example demonstrates:

  • Initializing the HoneyHive tracer
  • Using the @trace decorator for function tracing
  • Making AWS Bedrock API calls to list available foundation models

Key code sections:

# Imports and logger setup used by this excerpt
import logging
import os

import boto3
from honeyhive import HoneyHiveTracer, trace  # check your honeyhive version for the exact import path

logger = logging.getLogger(__name__)

# Initialize HoneyHive tracer
HoneyHiveTracer.init(
    api_key=os.getenv("HONEYHIVE_API_KEY"),
    project="aws-bedrock-examples",
    source="dev",
    session_name="list-bedrock-models"
)

@trace
def list_foundation_models(bedrock_client):
    try:
        response = bedrock_client.list_foundation_models()
        models = response["modelSummaries"]
        logger.info("Got %s foundation models.", len(models))
        return models
    except Exception as e:
        logger.error("Couldn't list foundation models: %s", str(e))
        raise
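
Note that list_foundation_models lives on the bedrock control-plane client, not the bedrock-runtime client used for invocation. A short usage sketch:

# Create the control-plane client and call the traced function
bedrock_client = boto3.client(
    "bedrock",
    region_name=os.getenv("AWS_REGION", "us-east-1"),
)
models = list_foundation_models(bedrock_client)
for model in models:
    print(model["modelId"])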

Text Generation with InvokeModel API

The bedrock_invoke_model.py example shows:

  • Tracing text generation with the InvokeModel API
  • Structured error handling with tracing
  • Parameter configuration for model invocation

Key code sections:

@trace
def invoke_bedrock_model(model_id, prompt, max_tokens=512, temperature=0.5, top_p=0.9):
    # Create an Amazon Bedrock Runtime client
    bedrock_runtime = boto3.client(
        "bedrock-runtime", 
        region_name=os.getenv("AWS_REGION", "us-east-1")
    )
    
    # Format the request payload using the model's native structure
    native_request = {
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
            "topP": top_p
        },
    }
    
    # Invoke the model and handle the response
    # [... implementation details ...]
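
The elided section invokes the model and parses the JSON response body. The sketch below continues from the names defined in invoke_bedrock_model above and assumes an Amazon Titan Text model, which is what the inputText/textGenerationConfig request shape corresponds to:

import json

# Invoke the model with the native request payload
response = bedrock_runtime.invoke_model(
    modelId=model_id,
    body=json.dumps(native_request),
)

# Decode the streamed response body and pull out the generated text (Titan format)
model_response = json.loads(response["body"].read())
generated_text = model_response["results"][0]["outputText"]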

Conversation Tracing with Converse API

The bedrock_converse.py example demonstrates:

  • Tracing multi-turn conversations
  • Using the more advanced Converse API
  • Maintaining conversation context across turns

Key code sections:

@trace
def multi_turn_conversation(model_id):
    # Create an Amazon Bedrock Runtime client
    bedrock_runtime = boto3.client(
        "bedrock-runtime", 
        region_name=os.getenv("AWS_REGION", "us-east-1")
    )
    
    # Start with an empty conversation
    conversation = []
    
    # First turn
    user_message = "What are three key benefits of cloud computing?"
    conversation.append({
        "role": "user",
        "content": [{"text": user_message}],
    })
    
    # Get the model's response and build the conversation history
    # [... implementation details ...]
    
    # Second turn
    user_message = "Can you elaborate on scalability?"
    # [... remaining implementation ...]
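
The elided turns follow the same shape: call the Converse API, read the assistant message from the response, and append it to the conversation so the next turn keeps full context. A sketch of one turn, continuing from the names above:

# Send the conversation so far to the model
response = bedrock_runtime.converse(
    modelId=model_id,
    messages=conversation,
    inferenceConfig={"maxTokens": 512, "temperature": 0.5},
)

# Read the assistant's reply and keep it in the conversation history
assistant_message = response["output"]["message"]
print(assistant_message["content"][0]["text"])
conversation.append(assistant_message)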

Conclusion

The AWS Bedrock + HoneyHive cookbook demonstrates how to implement comprehensive tracing for your AWS Bedrock applications. By following the patterns in these examples, you can gain visibility into your model performance, track user interactions, and gather metrics to improve your AI applications.

For more information:

  • AWS Bedrock documentation: https://docs.aws.amazon.com/bedrock/
  • HoneyHive documentation on tracing and the Python SDK
  • The cookbook scripts referenced above: bedrock_list_models.py, bedrock_invoke_model.py, and bedrock_converse.py