
If you’re using an AI coding tool and you want it to know how to work with HoneyHive, here’s how to give your agent access to up-to-date HoneyHive documentation.

  • Docs MCP Server - real-time doc search from your IDE.
  • llms.txt - paste a single URL for full doc context.
  • Page Quick Actions - copy any page directly into your agent.

Docs MCP Server

HoneyHive documentation includes a built-in Model Context Protocol (MCP) server. When connected, your AI assistant can search and retrieve HoneyHive docs in real time while generating responses, instead of relying on potentially outdated training data. The HoneyHive docs MCP server is available at:
https://docs.honeyhive.ai/mcp
The server exposes a search_honey_hive_ai_docs tool that performs semantic search across all HoneyHive documentation, returning relevant content with direct links to the source pages. Once connected, you can ask your AI assistant questions about HoneyHive tracing, evaluations, integrations, and more. It searches the documentation directly to provide accurate, current answers.
The docs MCP server indexes both the current (v2) and legacy (v1) documentation. When prompting, tell your agent to use the v2 docs so it pulls from the current SDK and UI references (see Example Prompts).
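Under the hood, the server speaks standard MCP JSON-RPC over HTTP. As a rough sketch of what a search request looks like on the wire (the tool name is from the docs above; the "query" argument name is an assumption — confirm the actual input schema with a tools/list call):

```python
import json

def make_search_request(query: str, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 tools/call request for the docs search tool.

    The tool name (search_honey_hive_ai_docs) comes from the docs above;
    the "query" argument name is an assumption -- verify it via tools/list.
    """
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "search_honey_hive_ai_docs",
            "arguments": {"query": query},
        },
    }
    return json.dumps(payload)

# In practice your MCP client POSTs a body like this to
# https://docs.honeyhive.ai/mcp after the initialize handshake.
print(make_search_request("how do I set up OpenAI tracing?"))
```

You normally never write this by hand — the IDE integrations below handle the protocol — but it shows what your agent is doing when it calls the search tool.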

Cursor

Open Cursor Settings with Cmd + Shift + J (Mac) or Ctrl + Shift + J (Windows/Linux), click Tools & MCP in the sidebar, then click New MCP Server. This opens ~/.cursor/mcp.json. Add:
{
  "mcpServers": {
    "honeyhive-docs": {
      "url": "https://docs.honeyhive.ai/mcp"
    }
  }
}
Use .cursor/mcp.json in your project root instead if you want to commit the config and share it with your team.

Claude Code

Run this command in your terminal to add the server to your current project:
claude mcp add --transport http honeyhive-docs https://docs.honeyhive.ai/mcp
To make it available across all projects, add the --scope user flag:
claude mcp add --transport http honeyhive-docs --scope user https://docs.honeyhive.ai/mcp

VS Code / GitHub Copilot

Add the following to your VS Code MCP settings configuration file (.vscode/mcp.json):
{
  "servers": {
    "honeyhive-docs": {
      "type": "http",
      "url": "https://docs.honeyhive.ai/mcp"
    }
  }
}
Run MCP: Open User Configuration from the Command Palette to register the server across all workspaces instead of a single project.

Windsurf

Add the following to your Windsurf MCP configuration:
{
  "mcpServers": {
    "honeyhive-docs": {
      "url": "https://docs.honeyhive.ai/mcp"
    }
  }
}

Codex CLI

Run this command to register the server in ~/.codex/config.toml:
codex mcp add honeyhive-docs --url https://docs.honeyhive.ai/mcp
The --url flag requires Codex CLI with streamable HTTP support. Upgrade with npm install -g @openai/codex if codex mcp add --url is not recognized.

Claude Desktop

Claude Desktop connects to remote MCP servers through custom connectors in the app, not through claude_desktop_config.json (which only supports local stdio servers).
  1. Open Claude Desktop and go to Settings > Connectors.
  2. Click Add custom connector.
  3. Set the name to honeyhive-docs and the remote MCP server URL to https://docs.honeyhive.ai/mcp, then click Add.
On Team and Enterprise plans, an Owner must first add the connector under Organization settings > Connectors before members can enable it. See Anthropic’s custom connectors guide for details.

ChatGPT

ChatGPT connects to MCP servers through custom connectors:
  1. Go to Settings > Apps & Connectors > Advanced settings and turn on Developer mode.
  2. Return to Settings > Apps & Connectors and click Create.
  3. Fill in a name (e.g. HoneyHive docs) and set the connector URL to https://docs.honeyhive.ai/mcp, then click Create.
Custom connectors with full MCP support are available on ChatGPT Business, Enterprise, and Education plans, and only workspace admins or owners can enable developer mode. See OpenAI’s connector setup guide for details.

llms.txt

The llms.txt file provides a structured overview of HoneyHive documentation optimized for LLM consumption. Include the URL in a prompt to give any AI assistant broad context about HoneyHive’s capabilities.
https://docs.honeyhive.ai/llms.txt
For the full documentation content in a single file:
https://docs.honeyhive.ai/llms-full.txt
For example, you might prompt: “Using the docs at https://docs.honeyhive.ai/llms-full.txt, help me add OpenAI tracing to my Python project.”
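If you are scripting prompts rather than pasting URLs by hand, the same pattern can be automated. A minimal Python sketch, assuming you fetch the index yourself and prepend it to a question (the prompt wording here is illustrative, not prescribed):

```python
from urllib.request import urlopen

LLMS_TXT_URL = "https://docs.honeyhive.ai/llms.txt"

def build_prompt(doc_index: str, question: str) -> str:
    """Prepend the llms.txt index to a question so the assistant
    can see which doc pages exist and ground its answer in them."""
    return (
        "Here is the HoneyHive documentation index:\n\n"
        f"{doc_index}\n\n"
        f"Using the pages listed above, answer: {question}"
    )

# Live fetch (network call; add error handling in real use):
# doc_index = urlopen(LLMS_TXT_URL).read().decode("utf-8")
# print(build_prompt(doc_index, "How do I add OpenAI tracing in Python?"))
```

For long sessions or larger questions, substitute llms-full.txt to include the full documentation content instead of just the index.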

When to use llms.txt vs. MCP

              llms.txt                                 MCP Server
Best for      One-off questions, quick context         Ongoing development, IDE integration
How it works  Static file with doc structure and links Real-time search and page retrieval
Setup         Paste a URL                              Add to your IDE config
Freshness     Updated on each docs deployment          Always live

Page Quick Actions

Every page in the HoneyHive docs has a contextual menu in the top-right corner with shortcuts for AI tools:
  • Copy as Markdown - paste the full page content directly into any AI assistant
  • View as Markdown - view the raw markdown source of the page
  • Open in ChatGPT - open the page in ChatGPT with full context
  • Open in Claude - open the page in Claude with full context
  • Connect MCP - connect the docs MCP server to your tool
  • Add to Cursor - add the docs MCP server directly to Cursor
  • Add to VS Code - add the docs MCP server directly to VS Code
These actions are useful for quickly sharing context with your AI tool without a full MCP server setup.

Example Prompts

Once you’ve connected HoneyHive docs to your agent, try these prompts. Each explicitly scopes the search to the current (v2) docs so the agent doesn’t pick up pages from the legacy v1 tree.
Search the HoneyHive v2 docs (pages under /v2/) and add OpenAI
tracing to my Python project using OpenInference. Install the
required packages and initialize the HoneyHiveTracer with my
API key.
Using the HoneyHive v2 docs (pages under /v2/), help me set up
an experiment that runs my RAG pipeline against a dataset and
scores results with a custom Python evaluator.
Search the HoneyHive v2 docs (pages under /v2/) for how to
create a monitoring dashboard that tracks latency, cost, and
error rates for my production LLM application.