Managing Prompts
Test, version and manage your prompts in the Studio.
Playground is a scratch pad to quickly iterate on prompts & “vibe-check” models.
In this guide, you’ll learn how to make the most of the HoneyHive Playground, where you can experiment with new prompts, models, OpenAI functions and external tools.
HoneyHive allows you to define, version and manage your prompt templates and model configurations within each project.
What is the Playground?
The Playground is a UI that connects with your LLMs wherever they are hosted & allows you to quickly iterate on prompts built on top of them.
The way it calls your LLM provider is as follows (a sketch of the resulting provider request appears after this list):
- We ask you to configure your provider secrets (that are encrypted & stored in your browser cache)
- Based on the parameters & prompt specified in the UI, we craft an API request for your provider
- We pass the secrets & the request to our proxy service which pings your provider
We trace cost, latency & calculate evaluators automatically on all requests from our proxy.
- If the request was successful, we stream or print the response in the UI
- If the request was unsuccessful, we show a full error description provided by the provider
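As a rough mental model, the crafted request looks something like the following, assuming an OpenAI-style chat completions provider. This is an illustrative sketch, not HoneyHive's actual proxy code; the model name, parameters, and messages are placeholders.

```python
import requests

API_KEY = "sk-..."  # your provider secret, as configured in the Playground

# Parameters & prompt as specified in the Playground UI (illustrative values).
payload = {
    "model": "gpt-4o-mini",
    "temperature": 0.7,
    "messages": [
        {"role": "system", "content": "You are a helpful Q&A bot."},
        {"role": "user", "content": "How does HoneyHive version prompts?"},
    ],
}

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

The proxy layers tracing on top of requests like this, so cost, latency, and evaluator results are recorded without any extra work on your side.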
To get started with the Playground, first configure a model provider.
Configure a model provider
Expected time: a few minutes
Steps
Next Steps
Congratulations, you are now ready to create prompts on top of your models in HoneyHive.
Create your first prompt
Expected time: a few minutes
In the following tutorial, we use AI Q&A bot as the project; you can pick any project you want to create your prompt in instead.
We use {{ and }} to denote a dynamic insertion field for a prompt. Dynamic variables are typically useful when inserting inputs from end users or external context from tools such as vector databases.
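For intuition, the snippet below shows how such a template gets filled in at request time. The template and variable names are illustrative, and in the Playground HoneyHive performs this substitution for you.

```python
import re

# Illustrative prompt template with two dynamic insertion fields.
template = (
    "Answer the question using the context below.\n"
    "Context: {{context}}\n"
    "Question: {{user_question}}"
)

def render(template: str, variables: dict) -> str:
    # Replace every {{name}} placeholder with its value from `variables`.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(variables[m.group(1)]), template)

print(render(template, {
    "context": "Retrieved passage from a vector database.",
    "user_question": "What does the passage say?",
}))
```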
Version Management
Our first prompts are often simple prototypes that we end up changing frequently.
- HoneyHive automatically versions your prompts as you edit your prompt template and test new scenarios.
- A new version is only created automatically when you run a test case against your edited prompt.
Press Save to store your prompt as a prompt-model configuration.
Iterating on a saved prompt
Our Playground supports easy forking & saving so you can track variants you like while you keep changing the prompt.
Expected time: a few minutes
Steps
Open a prompt from a previous run
If you want to go back to a prompt you had already run, or open one from a trace that was logged externally, then you can simply click “Open In Playground” from that run’s view.
Expected time: a few minutes
Steps
Sharing and Collaboration
To share a saved prompt, simply press the Share button at the top right of the Playground.
This will copy a link to the saved prompt that you can share with your teammates.
Using OpenAI Functions
- Navigate to Tools in the left sidebar.
- Click Add Tool and select OpenAI functions.
- Define your OpenAI function in JSON format (see the example below).
Learn more about the OpenAI function schema here.
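For reference, a function definition in the OpenAI functions format typically looks like the following. The function name and parameters are hypothetical and only meant to show the shape of the schema.

```json
{
  "name": "get_current_weather",
  "description": "Get the current weather for a given city",
  "parameters": {
    "type": "object",
    "properties": {
      "city": {
        "type": "string",
        "description": "City name, e.g. Berlin"
      },
      "unit": {
        "type": "string",
        "enum": ["celsius", "fahrenheit"]
      }
    },
    "required": ["city"]
  }
}
```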
Integrating Pinecone and SerpAPI
- Navigate to Tools in the left sidebar.
- Click Add Tool and select External Tool.
- Choose between SerpAPI and Pinecone in the dropdowns.
- Add your API keys and other parameters specific to your Pinecone index.
Using External Tools in the Playground
- You can access the Playground within the Prompts tab in the left sidebar.
- To use an external tool in your prompt template, copy the tool you’d like to use. We use /ToolName{{query_name}} as the convention to call a tool.
- Paste it into your prompt template and start using it, as shown in the example below.
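For example, a prompt template that calls the SerpAPI tool might look like the following; the variable name is illustrative.

```
Use the search results below to answer the user's question.

Search results: /SerpAPI{{user_question}}
Question: {{user_question}}
```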
What’s next
Now that you’ve defined some prompt configurations in the Playground, learn more about how to evaluate and monitor different prompt configurations using HoneyHive.