Online Experiments & A/B Tests
Learn how to A/B test anything online with HoneyHive
The complete schema flexibility of HoneyHive allows you to run online experiments on any part of your system and analyze the results in HoneyHive.
You can do this by filtering data for your specific feature flag and segmenting it by config properties like version to analyze how different prompt or model versions perform.
How to run online experiments
Prerequisites:
- You have already set up HoneyHive in your code as described here.
Expected Time: 5-10 minutes
Set a metadata field to track the online experiment
Set a metadata field to track the experiment ID.
Feel free to use the `experiment_id` from any pre-existing experimentation tool you are using (e.g., Statsig or LaunchDarkly).
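For example, here is a minimal sketch assuming the HoneyHive Python SDK's `HoneyHiveTracer.init` and `enrich_session` helpers from the setup guide. The metadata key `prompt-experiment-id` matches the filter used in the analysis step below; the API key, project name, and experiment ID values are placeholders:

```python
from honeyhive import HoneyHiveTracer, enrich_session

# Initialize the tracer once at application startup
# (see the setup guide linked in the prerequisites)
HoneyHiveTracer.init(
    api_key="YOUR_HONEYHIVE_API_KEY",  # placeholder
    project="YOUR_PROJECT_NAME",       # placeholder
)

# Tag the current session with the experiment ID from your
# experimentation tool (e.g., Statsig or LaunchDarkly)
enrich_session(metadata={"prompt-experiment-id": "exp-2024-prompt-v2"})
```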
Configure user feedback on the trace
Configure user feedback on the trace to track the experiment results.
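For example, once the user has reacted to a response, you could log their feedback on the same session. This is a sketch assuming the same `enrich_session` helper; the `liked` field name matches the field charted in the analysis step below:

```python
from honeyhive import enrich_session

# Record whether the user liked the response; the "liked" field
# is what you will chart in the HoneyHive dashboard
enrich_session(feedback={"liked": True})
```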
Analyze the results
Analyze the results of the experiment using the HoneyHive dashboard.
You can:
- Pick the `Session` or `Event` view depending on the level of granularity you need.
- Filter by the `prompt-experiment-id` metadata field to only look at the data from the experiment.
- Chart the `liked` field with whichever aggregation function you are interested in measuring.
- Group by the `config.version` field to see the results across your control and treatment groups.