How to instrument multi-modal pipelines in HoneyHive
HoneyHive’s tracing capabilities extend beyond text-based data, allowing you to capture and analyze multi-modal information in your AI applications. This guide focuses on instrumenting functions that handle multi-modal data, particularly those that return S3 URLs pointing to images, audio, or other non-text assets.
Multi-modal tracing is crucial for applications that process various types of data, such as:
Image generation or analysis
Audio processing
Video content creation or analysis
Document processing with embedded media
By tracing these functions, you can gain insights into how your application handles different data types and how they impact your AI pipeline’s performance and accuracy.
To instrument functions that return S3 URLs for multi-modal data, you'll use the same `trace` decorator as with text-based functions. Here's how to set it up:
First, ensure you’ve initialized the HoneyHiveTracer:
```python
from honeyhive import HoneyHiveTracer

HoneyHiveTracer.init(
    api_key=MY_HONEYHIVE_API_KEY,
    project=MY_HONEYHIVE_PROJECT_NAME,
    source=MY_SOURCE,  # e.g., "prod", "dev", etc.
    session_name=MY_SESSION_NAME,
)
```
Import and use the trace decorator:
```python
from honeyhive import trace

@trace
def process_image(image_path):
    # Image processing logic here
    # ...
    return "s3://my-bucket/processed-images/image123.jpg"
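To build intuition for what gets recorded here, the sketch below is a toy, self-contained stand-in for a tracing decorator (it is not HoneyHive's implementation): it captures the function name, inputs, and the returned value. The key point is that an S3 URL returned by the function is logged as the span's output like any other return value, so your image or audio asset reference lands in the trace.

```python
import functools

# Conceptual sketch only: spans are collected in a local list here,
# whereas HoneyHive's real `trace` decorator exports them to your project.
captured_spans = []

def toy_trace(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        # The return value is recorded as the span output, so an S3 URL
        # pointing at a non-text asset is captured like any other output.
        captured_spans.append({
            "name": func.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
        })
        return result
    return wrapper

@toy_trace
def process_image(image_path):
    # Placeholder for real image-processing logic
    return "s3://my-bucket/processed-images/image123.jpg"

url = process_image("local/photo.jpg")
```

After the call, `captured_spans[0]["output"]` holds the S3 URL, which is the piece of span data you would later inspect in HoneyHive when analyzing a multi-modal pipeline.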