Curating an Evaluation Dataset
The Importance of a Tailored Evaluation Dataset
While training datasets shape the knowledge and behavior of an LLM, evaluation datasets gauge its effectiveness. Here’s why crafting a specialized evaluation dataset is paramount:
- Precision in Assessment: A custom dataset ensures the model is tested against scenarios most relevant to your application.
- Spotting Blind Spots: A diverse dataset can uncover areas where the model might falter.
- Task Relevance: For niche applications, it’s vital to assess the model’s expertise in that niche.
- Real-world Readiness: Confirm that the model can handle real-world challenges beyond its training data.
- Benchmarking: A consistent evaluation dataset allows for performance comparison across models or iterations.
Crafting the right evaluation dataset is an art. Let’s delve into the methodology.
Crafting Your Evaluation Dataset
- Determine Evaluation Goals: Understand what you aim to measure. Is it the model’s general knowledge, its domain expertise, or its ability to handle edge cases?
- Gathering Data:
  - Leverage Benchmarks: Utilize existing benchmark datasets as a foundation.
  - Simulate Real-world Scenarios: Design scenarios or questions that the model might encounter in actual deployments.
- Refinement and Pruning:
  - Eliminate Redundancies: Ensure each data point tests something unique.
  - Clarify Ambiguities: Each data point should have a well-defined correct answer or output.
- Annotation and Ground Truth:
  - Consistent Standards: Ensure that annotations follow a consistent standard.
  - Reconciliation: If using multiple annotators, reconcile differences to arrive at a consensus.
- Challenge the Model: Incorporate a mix of easy, moderate, and hard examples to thoroughly test the model’s capabilities (see the sketch after this list).
- Continuous Updates: Regularly update the dataset to include new challenges or scenarios, ensuring the evaluation remains current.
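To make these steps concrete, here is a minimal sketch, assuming Python and a purely illustrative schema, that deduplicates datapoints, checks that every item has a well-defined ground truth, inspects the difficulty mix, and writes the result to data.json for the upload step described below. Only ground_truth is a reserved HoneyHive field; input and difficulty are hypothetical fields invented for this example.

import json

# Illustrative candidate datapoints. Only "ground_truth" is a reserved HoneyHive
# field; "input" and "difficulty" are hypothetical fields used for this sketch.
candidates = [
    {"input": "Summarize our refund policy in one sentence.",
     "ground_truth": "Refunds are issued within 30 days of purchase.",
     "difficulty": "easy"},
    {"input": "Summarize our refund policy in one sentence.",
     "ground_truth": "Refunds are issued within 30 days of purchase.",
     "difficulty": "easy"},  # redundant duplicate
    {"input": "Does the refund policy apply to discounted gift cards?",
     "ground_truth": "No.",
     "difficulty": "hard"},
]

# Eliminate redundancies: keep only one copy of each unique datapoint.
seen, dataset = set(), []
for datapoint in candidates:
    key = json.dumps(datapoint, sort_keys=True)
    if key not in seen:
        seen.add(key)
        dataset.append(datapoint)

# Clarify ambiguities: every datapoint needs a well-defined ground truth.
assert all(datapoint.get("ground_truth") for datapoint in dataset)

# Challenge the model: inspect the easy/moderate/hard mix before uploading.
difficulty_mix = {level: sum(d["difficulty"] == level for d in dataset)
                  for level in ("easy", "moderate", "hard")}
print(difficulty_mix)

# Write the list of JSON objects expected by the upload step below.
with open("data.json", "w") as f:
    json.dump(dataset, f, indent=2)

In practice, the deduplication key, ground-truth checks, and difficulty labels would come from your own annotation pipeline rather than hard-coded examples.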
Collaboration and Iteration
After crafting the evaluation dataset, continuous refinement is key:
- External Review: Engage external experts to review and challenge the dataset.
- Feedback Loop: Use model evaluations as feedback to refine the dataset further.
- Maintain Transparency: Document the dataset’s origins, criteria, and any changes over time.
- Upload to HoneyHive: Share the dataset with collaborators to gather diverse insights by uploading it to HoneyHive.
Through meticulous crafting and continuous iteration, you can shape an evaluation dataset that offers a true measure of your LLM’s capabilities.
Uploading your dataset to HoneyHive via the UI
Upload the dataset to HoneyHive by creating a new evaluation on the “Evaluations” page. When prompted to “Choose a dataset for evaluation”, click the “Upload” button in the bottom-right.
Uploading your dataset to HoneyHive via the SDK
import honeyhive
from honeyhive.sdk.datasets import create_dataset

honeyhive.api_key = "HONEYHIVE_API_KEY"

# Open the dataset file and upload it as an evaluation dataset
with open("data.json", "r") as file:
    create_dataset(
        name="Simple Dataset",
        project="New Project",
        purpose="evaluation",
        file=file,
        description="A simple dataset for testing"
    )
Note that data.json must contain a list of valid JSON objects. Within each object, ground_truth is a reserved field used in evaluations for calculating metrics.
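For illustration, a minimal data.json in that format might look like the following; the input field is an assumed, application-specific field, while ground_truth is the reserved field described above.

[
  {
    "input": "Summarize our refund policy in one sentence.",
    "ground_truth": "Refunds are issued within 30 days of purchase."
  },
  {
    "input": "Does the refund policy apply to discounted gift cards?",
    "ground_truth": "No."
  }
]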
Curating your dataset from production logs in HoneyHive
In addition to uploading complete datasets, you can easily add individual datapoints from one dataset to another by clicking the relevant datapoint in the “Completions” table and then clicking the “Add to Dataset” button in the top-right.
You can also add a group of datapoints to a dataset at once: filter your datapoints by some criteria, select all the datapoints that match, and add them to a dataset, as shown in the following screenshot.