Using the fine-tuning datasets stored in HoneyHive, you can quickly fine-tune OpenAI models like GPT-3.5 Turbo, or use your datasets to fine-tune open-source models via third-party providers like Together AI, Baseten, and Modal.

We typically recommend at least a couple hundred examples to fine-tune a custom model effectively. In most cases, increasing your dataset size yields roughly linear performance improvements.
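If you export a dataset for fine-tuning a chat model like GPT-3.5 Turbo, OpenAI expects JSONL with one training example per line, each containing a `messages` array. The sketch below shows that format with placeholder dataset contents; the file name and example text are illustrative, not part of HoneyHive's export.

```python
import json

# One training example per line, in OpenAI's chat fine-tuning JSONL format.
# The messages below are illustrative placeholders.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a helpful support assistant."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Go to Settings > Account and click Reset Password."},
        ]
    },
]

with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# Basic sanity check: every line parses and ends with an assistant turn,
# which is the completion the model learns to produce.
with open("train.jsonl") as f:
    for line in f:
        record = json.loads(line)
        assert record["messages"][-1]["role"] == "assistant"
```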

Fine-tune OpenAI models

  1. Navigate to Fine-Tuning within the Datasets tab in the sidebar to view any saved fine-tuning datasets. Here, you can select your dataset and click Fine-Tune to start the fine-tuning process.


  2. Once in Fine-Tuning, enter the appropriate training parameters for your fine-tuning job. Review your data payload and make any last-minute corrections if necessary.


  3. Click Start Fine-Tuning Job to start the fine-tuning process. You can navigate to the Fine-Tuning tab in the sidebar to check the status of your job.
  4. Once the fine-tuning job has completed, navigate to the Playground to test your new model, fork a new variant, and evaluate it against your baseline variant within Evaluations.
We recommend fine-tuning multiple models with different dataset sizes and training parameters, and benchmarking them against your production baseline variant within Evaluations before deploying your fine-tuned model to production.