You’re using an AI coding tool and you want it to know how to work with HoneyHive. Here’s how to give your agent access to up-to-date HoneyHive documentation.

There are three ways to do it:
  • Docs MCP Server - real-time doc search from your IDE
  • llms.txt - paste a single URL for full doc context
  • Page Quick Actions - copy any page directly into your agent

Docs MCP Server

HoneyHive documentation includes a built-in Model Context Protocol (MCP) server. When connected, your AI assistant can search and retrieve HoneyHive docs in real time while generating responses, instead of relying on potentially outdated training data. The HoneyHive docs MCP server is available at:
https://docs.honeyhive.ai/mcp
Once connected, you can ask your AI assistant questions about HoneyHive tracing, evaluations, integrations, and more. It searches the documentation directly to provide accurate, current answers.

Cursor

Navigate to Settings > MCP > Add new global MCP server, then add:
{
  "mcpServers": {
    "honeyhive-docs": {
      "url": "https://docs.honeyhive.ai/mcp"
    }
  }
}

Claude Code

Run this command in your terminal to add the server to your current project:
claude mcp add --transport http honeyhive-docs https://docs.honeyhive.ai/mcp
To make it available across all projects, add the --scope user flag:
claude mcp add --transport http honeyhive-docs --scope user https://docs.honeyhive.ai/mcp

VS Code / GitHub Copilot

Add the following to your VS Code MCP settings configuration file (.vscode/mcp.json):
{
  "servers": {
    "honeyhive-docs": {
      "url": "https://docs.honeyhive.ai/mcp"
    }
  }
}

Windsurf

Add the following to your Windsurf MCP configuration:
{
  "mcpServers": {
    "honeyhive-docs": {
      "url": "https://docs.honeyhive.ai/mcp"
    }
  }
}

Codex CLI

Run this command in your terminal to add the server:
codex mcp add honeyhive-docs --url https://docs.honeyhive.ai/mcp

Claude Desktop

  1. Open Claude Desktop.
  2. Go to Settings > Developer > Edit Config.
  3. Add the HoneyHive docs server to your mcpServers:
{
  "mcpServers": {
    "honeyhive-docs": {
      "url": "https://docs.honeyhive.ai/mcp"
    }
  }
}

ChatGPT

ChatGPT supports MCP through its Apps system. Enable Developer Mode in Settings > Apps & Connectors > Advanced settings, then create a new connector with the MCP server URL: https://docs.honeyhive.ai/mcp. Requires a ChatGPT Pro, Team, Enterprise, or Edu plan.

llms.txt

The llms.txt file provides a structured overview of HoneyHive documentation optimized for LLM consumption. Include the URL in a prompt to give any AI assistant broad context about HoneyHive’s capabilities.
https://docs.honeyhive.ai/llms.txt
For the full documentation content in a single file:
https://docs.honeyhive.ai/llms-full.txt
For example, you might prompt: “Using the docs at https://docs.honeyhive.ai/llms-full.txt, help me add OpenAI tracing to my Python project.”
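If you are scripting against an LLM API rather than chatting interactively, the same idea applies: put the URL (or the fetched file contents, for models that can't browse) into your prompt. A small sketch, where `build_docs_prompt` and `fetch_docs` are hypothetical helpers, not part of any HoneyHive SDK:

```python
import urllib.request

LLMS_TXT_URL = "https://docs.honeyhive.ai/llms.txt"

def build_docs_prompt(question: str, docs_url: str = LLMS_TXT_URL) -> str:
    """Compose a prompt that points the assistant at the llms.txt index.
    Any phrasing that includes the URL works; this is just one option."""
    return f"Using the docs at {docs_url}, {question}"

def fetch_docs(url: str = LLMS_TXT_URL) -> str:
    """For models that can't fetch URLs, inline the file contents instead."""
    with urllib.request.urlopen(url) as resp:  # network call
        return resp.read().decode("utf-8")

print(build_docs_prompt("help me add OpenAI tracing to my Python project."))
```

Swap in llms-full.txt when the model's context window is large enough to hold the entire documentation set.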

When to use llms.txt vs. MCP

              llms.txt                                   MCP Server
Best for      One-off questions, quick context           Ongoing development, IDE integration
How it works  Static file with doc structure and links   Real-time search and page retrieval
Setup         Paste a URL                                Add to your IDE config
Freshness     Updated on each docs deployment            Always live

Page Quick Actions

Every page in the HoneyHive docs has a contextual menu in the top-right corner with shortcuts for AI tools:
  • Copy as Markdown - paste the full page content directly into any AI assistant
  • View as Markdown - view the raw markdown source of the page
  • Open in ChatGPT - open the page in ChatGPT with full context
  • Open in Claude - open the page in Claude with full context
  • Connect MCP - connect the docs MCP server to your tool
  • Add to Cursor - add the docs MCP server directly to Cursor
  • Add to VS Code - add the docs MCP server directly to VS Code
These actions are useful for quickly sharing context with your AI tool without a full MCP server setup.

Example Prompts

Once you’ve connected HoneyHive docs to your agent, try these prompts:
Search the HoneyHive docs and add OpenAI tracing to my Python
project using OpenInference. Install the required packages and
initialize the HoneyHiveTracer with my API key.
Using the HoneyHive docs, help me set up an experiment
that runs my RAG pipeline against a dataset and scores
results with a custom Python evaluator.
Search HoneyHive docs for how to create a monitoring
dashboard that tracks latency, cost, and error rates
for my production LLM application.