Fetching Deployed Prompts
How to deploy prompts to specific environments and export them for use in your application.
HoneyHive allows you to manage your prompts in the platform instead of your codebase and deploy your saved prompts to specific environments. This allows your domain experts to independently iterate on and deploy prompts without needing any technical skills.
This guide will walk through available environments, how to deploy a prompt to a specific environment within the app, and how to export and use these prompts in your application.
Available Environments
Each environment is specific to a project and can be used to automatically fetch the latest deployed prompt version. By default, we offer the 3 environments below:
- `dev`: For development-related tasks such as testing, debugging, etc.
- `staging`: For staging your prompt changes
- `prod`: For managing your production instance
Deploying Prompts
Expected Time: Less than a minute
To deploy a prompt, simply navigate to Registry within Studio. Here, you can select any of your saved prompts and deploy it to a specific environment.
Integrating with your application
After creating a desired prompt and deploying it, you’ll need to add it to your codebase to use in development or production. We suggest two primary flows for exporting and using your prompts:
- SDK-based export (with caching)
- YAML file export
1. SDK-based export
You can fetch your deployed prompts using our GET Configurations API. This method allows you to dynamically retrieve the latest version of your prompts directly from HoneyHive.
Basic SDK Usage
Here are examples of how to use the API in Python and TypeScript.
The `env` and `name` parameters are optional in the examples below.
- Fetching all prompts: If only the `project` is specified, all prompts in the project will be returned.
- Fetching the `prod` deployed version: By setting `env` to `operations.env.PROD` (Python) or `Env.Prod` (JS/TS), the prompt deployed to the `prod` environment will be returned.
- Fetching prompts deployed to other environments: Specifying a particular environment will fetch the prompt for that specific environment.
- Fetching a specific prompt: Specifying a name will fetch that exact prompt.
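As a minimal sketch of the fetch patterns above, the helper below builds the optional `project`/`env`/`name` filters and forwards them to a client. The `client.configurations.get_configurations(...)` call path is an assumption based on the GET Configurations API described here; check the SDK reference for the exact client class and method names.

```python
from typing import Optional


def fetch_prompt(client, project: str,
                 env: Optional[str] = None,
                 name: Optional[str] = None):
    """Fetch prompt configurations; `env` and `name` are optional filters.

    - project only          -> all prompts in the project
    - project + env         -> the prompt deployed to that environment
    - project + env + name  -> one specific deployed prompt
    """
    params = {"project": project}
    if env is not None:
        params["env"] = env    # e.g. "prod" for the production deployment
    if name is not None:
        params["name"] = name  # fetch one specific prompt by name
    # Assumed call path; substitute your SDK's actual method here.
    return client.configurations.get_configurations(**params)
```

Because only the supplied filters are sent, the same helper covers all four fetch patterns listed above.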
Cached SDK export
To reduce the number of API calls and improve performance, we recommend implementing a caching mechanism. Here are examples using LRU (Least Recently Used) cache in both Python and TypeScript:
By implementing caching, you can significantly reduce the number of API calls while still ensuring that your application has access to up-to-date prompt configurations.
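One way to sketch this in Python is to combine `functools.lru_cache` with a time bucket so cached entries expire: identical lookups within one TTL window are served from memory, and a fresh API call is made once the window rolls over. The `fetch_fn` parameter stands in for whatever function calls the GET Configurations API in your code; the TTL and cache size shown are illustrative defaults, not recommendations from HoneyHive.

```python
import time
from functools import lru_cache

CACHE_TTL_SECONDS = 300  # how long a cached prompt stays fresh (tunable)


def make_cached_fetcher(fetch_fn, ttl=CACHE_TTL_SECONDS, maxsize=32):
    """Wrap an API-fetching function with a TTL-aware LRU cache.

    `fetch_fn(project, env, name)` should call the GET Configurations
    API; repeated identical lookups within one TTL window hit the
    cache instead of the network.
    """
    @lru_cache(maxsize=maxsize)
    def _cached(project, env, name, ttl_bucket):
        # `ttl_bucket` is part of the cache key, so when it changes
        # the stale entry stops matching and fetch_fn runs again.
        return fetch_fn(project, env, name)

    def get(project, env=None, name=None):
        bucket = int(time.time() // ttl)  # rolls over every `ttl` seconds
        return _cached(project, env, name, bucket)

    return get
```

A usage sketch: `get_prompt = make_cached_fetcher(my_api_call)` followed by `get_prompt("my-project", env="prod")` in your request path.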
2. YAML file export
Another approach to exporting prompts is by saving them as YAML files and including them in your project. This method is useful when you want to version control your prompts with your codebase or when you prefer to have the prompt configurations directly in your codebase.
We recommend running this export flow in your production build process.
Exporting to YAML
Here’s how you can export a prompt configuration to a YAML file using the HoneyHive SDK:
These functions will fetch the prompt configuration using the HoneyHive SDK and save it as a YAML file. You can later import it elsewhere for making calls to an LLM.
Reading YAML Configurations
Once you have exported your prompt configurations to YAML files, you can easily load them in your application. Here’s how you can read the YAML files:
These functions allow you to load the YAML configuration files back into your application, making it easy to use the exported prompt configurations in your code.
Conclusion
Whether you choose the SDK-based approach for real-time updates or the YAML file method for static configurations, HoneyHive provides flexible options for integrating your prompts into your application. Choose the method that best fits your development workflow and application requirements.