Get API Key

After signing up on the app, you can find your API key in the Settings page under Account.

Install the Go SDK

go get github.com/honeyhive-ai/honeyhive-go-sdk
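
Once the SDK is installed, you can initialize a client with your API key. A minimal sketch, assuming the key is exported in an environment variable named HONEYHIVE_API_KEY (the variable name is an assumption; the security scheme mirrors the full example later on this page):

package main

import (
    "os"

    "github.com/honeyhive-ai/honeyhive-go-sdk"
    "github.com/honeyhive-ai/honeyhive-go-sdk/pkg/models/shared"
)

func main() {
    // Read the API key from the environment rather than hard-coding it.
    apiKey := os.Getenv("HONEYHIVE_API_KEY")

    honeyhiveClient := honeyhive.New(honeyhive.WithSecurity(
        shared.Security{
            BearerAuth: shared.SchemeBearerAuth{
                Authorization: "Bearer " + apiKey,
            },
        },
    ))

    _ = honeyhiveClient // used in the requests shown below
}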

HoneyHive Proxy Method

You can leverage our prompt CI/CD capabilities to automatically call the currently deployed prompt within a specified project without passing all the parameters yourself. With this method, we route your requests to whichever model-prompt configuration you have deployed within the platform.

More documentation can be found on our saved prompt generations API page.
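
The request only needs your project name and the inputs your deployed prompt template expects. A minimal sketch of the surrounding values (the variable names and the map type for Inputs are assumptions; honeyhiveClient is the client initialized above):

ctx := context.Background()

projectName := "PROJECT_NAME"

// Values for the variables referenced in your deployed prompt template
// (the keys here are hypothetical).
inputs := map[string]interface{}{
    "customer_name": "Ada",
    "product":       "HoneyHive",
}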

// Route the request to whichever prompt-model configuration is currently
// deployed for this project.
req := operations.TaskCreateGenerationRequest{
    Request: shared.TaskGenerationInput{
        Task:   &projectName,
        Inputs: inputs,
    },
}

res, err := honeyhiveClient.Generation.TaskCreateGeneration(ctx, req)
if err != nil {
    // handle the error
}

if res.TaskGenerationOutput != nil {
    // Keep the generation ID so you can attach feedback to it later.
    generationID := res.TaskGenerationOutput.GenerationID
    fmt.Println(generationID)
}

Ingest Model Completions

Alternatively, you can run model completion requests on your own servers and ingest the responses afterwards through our logging API, without going through HoneyHive's proxy server. We accept all the typical model provider parameters, along with task, source, and latency as HoneyHive-specific parameters. If you choose to deploy this way, we recommend tracking request latency on your end.
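
For example, you can time your own completion call and pass the measured latency into the ingest request below. A minimal sketch, assuming latency is reported in milliseconds and using callYourModelProvider and prompt as stand-ins for your own provider request (all of these are assumptions):

start := time.Now()

// Placeholder for your own completion request (e.g. to OpenAI); resp is
// assumed to expose the usage and choice fields referenced below.
resp, err := callYourModelProvider(ctx, prompt)
if err != nil {
    // handle the error
}

// Request latency measured on your end; the millisecond unit is an assumption.
elapsed := float64(time.Since(start).Milliseconds())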

Using this method, you will not be able to use our prompt CI/CD capabilities within the platform, and you will need to manually update your prompt, model provider, and hyperparameter settings when deploying new variants to production.

package main

import (
    "context"
    "fmt"

    "github.com/honeyhive-ai/honeyhive-go-sdk"
    "github.com/honeyhive-ai/honeyhive-go-sdk/pkg/models/operations"
    "github.com/honeyhive-ai/honeyhive-go-sdk/pkg/models/shared"
)

func main() {
    ctx := context.Background()

    honeyhiveClient := honeyhive.New(honeyhive.WithSecurity(
        shared.Security{
            BearerAuth: shared.SchemeBearerAuth{
                Authorization: "Bearer <API_KEY>",
            },
        },
    ))

    projectName := "PROJECT_NAME"
    source := "SOURCE"

    // maxTokens, temperature, topP, fp, and pp hold the hyperparameters you
    // used for your own model call.
    hyperparameters := shared.HyperParameters{
        MaxTokens:        &maxTokens,
        Temperature:      &temperature,
        TopP:             &topP,
        FrequencyPenalty: &fp,
        PresencePenalty:  &pp,
    }

    // Token usage reported by your model provider's response (resp).
    usage := map[string]int{
        "prompt_tokens":     resp.Usage.PromptTokens,
        "completion_tokens": resp.Usage.CompletionTokens,
        "total_tokens":      resp.Usage.TotalTokens,
    }

    // promptTemplate, engine, inputs, and elapsed come from the completion
    // request you ran on your own servers.
    req := operations.IngestSingleGenerationRequest{
        Request: shared.SingleGenerationInput{
            Task:            &projectName,
            PromptTemplate:  &promptTemplate,
            Model:           &engine,
            Inputs:          inputs,
            Usage:           usage,
            Source:          &source,
            Hyperparameters: &hyperparameters,
            Generation:      &resp.Choices[0].Text,
            Latency:         &elapsed,
        },
    }

    res, err := honeyhiveClient.Generation.IngestSingleGeneration(ctx, req)
    if err != nil {
        fmt.Println(err)
        return
    }

    if res.SingleGenerationOutput != nil {
        // Keep the generation ID so you can log user feedback against it.
        generationID := res.SingleGenerationOutput.GenerationID
        fmt.Println(generationID)
    }
}

Log User Feedback and Metadata

Using the generation_id that is returned, you can then send arbitrary feedback to HoneyHive using the feedback endpoint.

As a best practice when deploying models to production, we recommend including a unique user identifier along with each request.

taskName := "PROJECT_NAME"

// Feedback is an arbitrary JSON object: corrections, acceptance flags, and
// any user metadata you want to track.
feedbackJSON := map[string]interface{}{
    "correction":        "This is not a test",
    "accepted":          true,
    "feedback_provided": true,
    "user_id":           "user_12345", // unique user identifier, as recommended above
    "user_tenant":       "Contoso",
    "user_country":      "US",
    "user_department":   "Engineering",
}

req := operations.CreateFeedbackRequest{
    Request: shared.Feedback{
        Task:         &taskName,
        GenerationID: generationID,
        FeedbackJSON: feedbackJSON,
    },
}

_, err := honeyhiveClient.Feedback.CreateFeedback(ctx, req)
if err != nil {
    // handle the error
}