HoneyHive combines logs, metrics, and traces into a unified, high-cardinality data model that provides a comprehensive view of your AI system’s performance and behavior. By consolidating these traditionally separate observability pillars into a single, flexible event-based structure, we enable developers to gain deeper insights and perform more sophisticated analyses. This approach offers several key benefits:

  • Unified Context: Each event captures not just raw data, but also the surrounding context, allowing for more meaningful correlations and insights.
  • Flexible Querying: High cardinality enables precise filtering and aggregation across multiple dimensions, facilitating complex analyses and troubleshooting.
  • Scalability: The event-based model scales efficiently with the growing complexity of AI systems and the increasing volume of observability data.
  • Faster Debugging: The ability to trace a request through various components while simultaneously accessing logs and metrics streamlines the debugging process.

Introducing Events

The base unit of data in HoneyHive is called an event, which represents a span in a trace. The root event in a trace is of type session, while all non-root events can be one of three core types: model, tool, and chain.

All events have a parent-child relationship, except the session event, which, being the root event, has no parent. (The sketch after this list shows how events link together.)
  • session: A root event used to group together multiple model, tool, and chain events into a single trace. This is achieved by having a common session_id across all children.
  • model events: Used to track the execution of any LLM requests.
  • tool events: Used to track execution of any deterministic functions like requests to vector DBs, requests to an external API, regex parsing, document reranking, and more.
  • chain events: Used to group together multiple model and tool events into composable units that can be evaluated and monitored independently. Typical examples of chains include retrieval pipelines, post-processing pipelines, and more.
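
For instance, a simple retrieval-augmented trace could be wired together as in the sketch below, where every event carries a shared session_id and points at its parent via parent_id (the helper function, names, and IDs are illustrative, not part of any HoneyHive SDK):

```python
import uuid


def new_event(event_type, event_name, parent=None, session_id=None):
    """Build a bare-bones event dict; real events carry many more fields."""
    event_id = str(uuid.uuid4())
    return {
        "event_id": event_id,
        "event_type": event_type,
        "event_name": event_name,
        # The root session event has no parent; every other event points at one.
        "parent_id": parent["event_id"] if parent else None,
        # All events in a trace share the root session's ID.
        "session_id": session_id or event_id,
        "children_ids": [],
    }


# Root of the trace
session = new_event("session", "Docs Assistant")

# A chain grouping the retrieval steps into one composable unit
retrieval = new_event("chain", "retrieval_pipeline",
                      parent=session, session_id=session["session_id"])
session["children_ids"].append(retrieval["event_id"])

# A deterministic tool call (e.g. a vector DB query) inside the chain
search = new_event("tool", "vector_search",
                   parent=retrieval, session_id=session["session_id"])
retrieval["children_ids"].append(search["event_id"])

# The LLM request that answers using the retrieved context
answer = new_event("model", "answer_generator",
                   parent=session, session_id=session["session_id"])
session["children_ids"].append(answer["event_id"])
```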


Session Events

Session events are used to track the execution of your application. These can be used to capture (see the sketch after this list):

  • Session configuration like the application version, environment, etc.
  • Session metrics like session latency, session throughput, etc.
  • Session properties like user id, country, tier, etc.
  • Session feedback like overall session feedback, etc.
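
A bare-bones session event carrying these properties might look like the sketch below (the values and the custom latency metric are illustrative; the full field list follows in the schema):

```python
session_event = {
    "event_type": "session",
    "event_name": "Docs Assistant",
    "source": "production",                    # environment/deployment context
    "config": {"app_version": "1.0.1"},        # session configuration
    "metrics": {"session_latency_ms": 80510},  # custom session metrics
    "user_properties": {                       # session properties
        "user_id": "user_123",
        "user_tier": "free",
        "country": "US",
    },
    "feedback": {"rating": 5},                 # overall session feedback
}
```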

Schema for Session Events

| Root Field | Field | Type | Description | Reserved |
|---|---|---|---|---|
| config | app_version | string | The version of the LLM application currently running. | No |
| source | - | string | The environment/deployment context (production, staging, etc.). | No |
| session | session_id | string | Unique identifier for the session/interaction. | No |
| | start_time | Number | Minimum UTC timestamp (ms) of start_time in session hierarchy. | No |
| | end_time | Number | Maximum UTC timestamp (ms) of end_time in session hierarchy. | No |
| | duration | Number | Calculated difference between end_time and start_time (ms). | No |
| metadata | num_events | Number | Total number of events captured during the session. | Yes |
| | num_model_events | Number | Number of model-related events (LLM requests) in session. | Yes |
| | has_feedback | Boolean | Indicates if session contains user feedback events. | Yes |
| | cost | Number | Total LLM usage cost based on provider’s pricing model. | Yes |
| | total_tokens | Number | Total tokens processed (input + output). | Yes |
| | prompt_tokens | Number | Tokens in user prompts/input. | Yes |
| | completion_tokens | Number | Tokens in LLM-generated responses. | Yes |
| user_properties | user_id | string | Unique identifier for the user. | No |
| | user_tier | string | User subscription tier (free/pro). | No |
| | user_tenant | string | Tenant/organization for multi-tenant applications. | No |

Properties marked as “Reserved” in the schema are automatically calculated and managed internally by HoneyHive’s auto-tracing system.
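
HoneyHive fills these in for you, but the roll-up is roughly the following (a sketch of how the fields relate, not how the backend actually computes them):

```python
def rollup_session_metadata(events):
    """Derive the reserved session metadata from a list of child event dicts."""
    model_events = [e for e in events if e.get("event_type") == "model"]
    usage = [e.get("metadata", {}) for e in model_events]
    return {
        "num_events": len(events),
        "num_model_events": len(model_events),
        "has_feedback": any(e.get("feedback") for e in events),
        "cost": sum(u.get("cost", 0) for u in usage),
        "total_tokens": sum(u.get("total_tokens", 0) for u in usage),
        "prompt_tokens": sum(u.get("prompt_tokens", 0) for u in usage),
        "completion_tokens": sum(u.get("completion_tokens", 0) for u in usage),
    }
```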

Example for Session Events

Here’s an example session event:

{
  "source": "evaluation",
  "project_id": "65e0fc2d6a2eb95f55a92cbc",
  "session_id": "d22c2b1d-b2cf-4593-b489-bb9ed2841d13",
  "event_id": "d22c2b1d-b2cf-4593-b489-bb9ed2841d13",
  "parent_id": null,
  "children_ids": [
    "441de3d0-5e73-4351-ad05-5c60886937d1",
    "15e41853-ff4e-4355-a691-a4d366b3635e"
  ],
  "event_type": "session",
  "event_name": "Ramp Docs Assistant",
  "start_time": 1710161932.7,
  "end_time": 1710147613.894,
  "duration": 80509.507,
  "config": {
    "app_version": "1.0.1"
  },
  "inputs": {
    "question": "How do I build an integration using Ramp API?",
    "chat_history": [
      {
        "role": "system",
        "content": "\nAnswer the user's question only using provided context. Don't lie.\n\nContext: Getting started\nWelcome to the Ramp API. Use the Ramp API to access transactions, issue cards, invite users, and so on.\n\nWe recommend getting started by connecting a new app and going through the request authorization documentation.\n\nFor Ramp developer partners\nIf you are a Ramp partner and want to offer your application to other Ramp customers, please contact your Ramp liaison and we will help set up your application.\n\n\nEnvironments\nThe API is available in two environments that can be accessed by sending requests to different hosts.\n\nEnvironment\nHost\nOpenAPI spec\nDescription\nProduction\nhttps://api.ramp.com\nProduction spec\nUse our production environment to go live with your application.\nSandbox\nhttps://demo-api.ramp.com\nSandbox spec ↗\nFill out this form ↗ to request a sandbox. A sandbox is a full-fledged environment in which you can explore different API endpoints and test your application.\n\n\nContact us\nHave feedback, questions, or ideas? Get in touch via email at developer-support@ramp.com ↗.\n\n\n\nRate limiting\nWe rate limit requests to preserve availability responsibly. The current limit (subject to change) is 200 requests, and gets refreshed in a 10 second rolling window.\n\nWhen the limit is reached, API calls will start getting 429 Too Many Requests responses.\n\nAfter a minute, the request limit will be replenished and you'll be able to make requests again. Note that any API calls made during this window will restart the clock, delaying the replenishment.\n\nPlease contact your Ramp liaison if you would like to request a limit increase for your account.\n\n\n\nApp connection\nAdmin user privileges required\nPlease note that only business admin or owner may register and configure the application. It is not recommended to downgrade the admin that created the app to a non-admin role.\n\n\nRegistering your application in the Ramp developer dashboard is the first step of building an integration based on Ramp API.\n\n\nFrom the Ramp developer ↗ settings page, click on Create new app to register a new application. Provide app name and app description, sign the Terms of service ↗, and click Create app.\n\n\nNow you have registered a new application. Click into it and configure the following parameters:\n\nClient ID and client secret: Credentials for your application; store securely.\nApp name and description\nGrant types: A list of grant types that the application may use to get access token. See authorization guide for more information.\nScopes: Defines scopes that may be granted to access token.\nRedirect URIs: A list of URIs telling Ramp where to send back the users in the authorization process.\nRedirect URI format\nNote that redirect URIs must either use https protocol or be in localhost.\n\n✅ https://example.com/callback is valid\n❎ http://example.com/callback is invalid\n✅ http://localhost:8000/callback is valid\n\n\n\n\nOAuth 2.0\nRamp API uses the OAuth 2.0 protocol ↗ to handle authorization and access control.\n\nWhich grant type should you use?\nIf you are a Ramp customer and your application only accesses your own Ramp data, then you can use either client credentials grant or authorization code grant. If your application is used by other Ramp customers, the authorization code grant is required.\n\nClient Credentials Grant\nClient Credentials ↗ grant can be used to get an access token outside of the context of a user. 
It is typically used by applications to directly access their own resources, not on behalf of a user.\n\nTo obtain a token, make a request to POST /developer/v1/token. You must include an Authorization header containing a base-64 representation of client_id:client_secret.\n\n\nShell\n\nJavaScript\n\nPython\n\ncurl --location --request POST 'https://api.ramp.com/developer/v1/token' \\\n    --header 'Authorization: Basic <base64-encoded client_id:client_secret>' \\\n    --header 'Content-Type: application/x-www-form-urlencoded' \\\n    --data-urlencode 'grant_type=client_credentials' \\\n    --data-urlencode 'scope=business:read transactions:read' \nThe response JSON payload contains a ready-to-use access_token. The Client Credentials Grant does not produce refresh tokens - you manually obtain new access tokens before the existing ones expire.\n\nAuthorization Code Grant\nThere are three parties involved in the Authorization Code flow -- the client (your application), the server (Ramp) and the user (data owner). The overall flow follows these steps:\n\nYour application sends the user to authenticate with Ramp.\nThe user sees the authorization prompt and approves the app's request for data access.\nThe user is redirected back via a redirect_uri with a temporary authorization_code.\nYour application exchanges the authorization_code for an access_token.\nRamp verifies the params and returns an access_token.\nYour application gets a new access_token with the refresh_token."
      },
      {
        "role": "user",
        "content": "How do I build an integration using Ramp API?"
      }
    ]
  },
  "outputs": {
    "content": "To build an integration using Ramp API, you need to:\n\nget started\nwith Ramp API\n\nRecommended steps:\n\n1. register a new application in the Ramp developer dashboard.\n2. configure parameters for the application, such as client ID, client secret, grant types, scopes, and redirect URIs.\n3.Obtain an access token to use the Ramp API on behalf of the user\nReference: OAuth 2.0 Ramp API"
  },
  "metrics": {
    "output_length": 394
  },
  "feedback": {
    "rating": 5
  },
  "metadata": {
    "num_events": 2,
    "num_model_events": 1,
    "has_feedback": true,
    "cost": 0,
    "total_tokens": 305,
    "prompt_tokens": 203,
    "completion_tokens": 102
  },
  "user_properties": {
    "user_id": "user_123",
	"user_tier": "free"
  },
  "error": null,
}

Model Events

Model events represent a request made to an LLM. These can be used to capture:

  • Model configuration like model name, model hyperparameters, prompt template, etc.
  • Model metrics like completion token count, cost, tokens per second, etc.
  • API-level metrics like request latency, rate limit errors, etc.

Schema for Model Events

| Root Field | Field | Type | Description | Reference | Critical |
|---|---|---|---|---|---|
| config | model | String | The name or identifier of the LLM model being used for the request. | | Yes |
| | provider | String | The provider or vendor of the LLM model (e.g., Anthropic, OpenAI, etc.). | Based on LiteLLM’s list of providers | Yes |
| | temperature | Number | The temperature hyperparameter value used for the LLM, which controls the randomness or creativity of the generated output. | | Yes |
| | max_tokens | Number | The maximum number of tokens allowed to be generated by the LLM for the current request. | | Yes |
| | top_p | Number | The top-p sampling hyperparameter value used for the LLM, which controls the diversity of the generated output. | | Yes |
| | top_k | Number | The top-k sampling hyperparameter value used for the LLM, which controls the diversity of the generated output. | | Yes |
| | template | Array | The prompt template or format used for structuring the input to the LLM. | | Yes |
| | type | String | Type of model request - “chat” or “completion”. | | Yes |
| | tools | Array | Array of OpenAI compatible tool list. | OpenAI API - Function Calling | Yes |
| | tool_choice | String | Tool selection choice. | | Yes |
| | frequency_penalty | Number | Controls the model’s likelihood to repeat information. | | Yes |
| | presence_penalty | Number | Controls the model’s likelihood to introduce new information. | | Yes |
| | stop_sequences | Array | Array of strings that will cause the model to stop generating. | | Yes |
| | is_streaming | Boolean | Boolean indicating if the response is streamed. | | Yes |
| | repetition_penalty | Number | Controls repetition in the model’s output. | | Yes |
| | user | String | Person who created the prompt. | | No |
| | headers | Object | Object containing request headers. | | No |
| | decoding_method | String | String specifying the decoding method. | | No |
| | random_seed | Number | Number used for reproducible outputs. | | No |
| | min_new_tokens | Number | Minimum number of new tokens to generate. | | No |
| | {custom} | Any | Any additional configuration properties to track | | No |
| inputs | chat_history | Array | The messages or context provided as input to the LLM, typically in a conversational or chat-like format. | OpenAI API - Chat Messages | Yes |
| | functions | Object | OpenAI compatible functions schema. | OpenAI API - Function Calling | No |
| | nodes | Array | Array of strings - text chunks from retrievers. | | No |
| | chunks | Array | Array of strings - text chunks from retrievers. | | No |
| | {custom} | Any | Any arbitrary input properties to track | | No |
| outputs | choices | Array | Array of OpenAI compatible choices schema. | OpenAI API - Chat Completion | Yes |
| | role | String | The role or perspective from which the LLM generated the response (e.g., assistant, user, system). | | No |
| | content | String | The actual response message generated by the LLM. | | No |
| | {custom} | Any | Any additional output properties to track | | No |
| metadata | total_tokens | Number | The total number of tokens in the LLM’s response, including the prompt and completion. | | Yes |
| | completion_tokens | Number | The number of tokens in the generated completion or output from the LLM. | | Yes |
| | prompt_tokens | Number | The number of tokens in the prompt or input provided to the LLM. | | Yes |
| | cost | Number | The cost or pricing information associated with the LLM request, if available. | | Yes |
| | system_fingerprint | String | System fingerprint string. | | No |
| | response_model | String | Response model string. | | No |
| | status_code | Number | HTTP status code of the request. | | No |
| | {custom} | Any | Any additional metadata properties | | No |
| metrics | {custom} | Any | Any custom metrics or performance indicators | | No |
| feedback | {custom} | Any | Any end-user provided feedback | | No |
| duration | - | Number | The total time taken for the LLM request, measured in milliseconds, which can help identify performance bottlenecks or slow operations. | | No |
| error | - | String | Any errors, exceptions, or error messages that occurred during the LLM request, which can aid in debugging and troubleshooting. | | No |

Properties marked as critical are required by HoneyHive for core functionality:

  • Model configuration, inputs, and outputs properties are used for rendering and replaying requests in the HoneyHive playground
  • Token counts and cost metadata are used for aggregating session-level analytics

All other properties are recommendations based on our auto-tracing system and can be customized based on your needs.
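
As one illustration of how the critical fields line up with a real request, the sketch below wraps an OpenAI-style chat completion call and shapes the result into a model event (the field mapping, timing code, and client are assumptions for the example; HoneyHive’s auto-tracer captures this for you when enabled):

```python
import time


def model_event_from_chat_call(client, messages, model="gpt-4o", temperature=0.7):
    """Call an OpenAI-compatible chat API and shape the result as a model event."""
    start = time.time()
    response, error = None, None
    try:
        response = client.chat.completions.create(
            model=model, messages=messages, temperature=temperature
        )
    except Exception as exc:          # surface provider errors in the event
        error = str(exc)
    duration_ms = (time.time() - start) * 1000

    event = {
        "event_type": "model",
        "event_name": "chat_completion",
        "config": {"model": model, "provider": "openai",
                   "temperature": temperature, "type": "chat"},
        "inputs": {"chat_history": messages},
        "outputs": {},
        "metadata": {},
        "duration": duration_ms,      # milliseconds, as in the schema
        "error": error,
    }
    if response is not None:
        message = response.choices[0].message
        event["outputs"] = {"role": message.role, "content": message.content}
        event["metadata"] = {
            "prompt_tokens": response.usage.prompt_tokens,
            "completion_tokens": response.usage.completion_tokens,
            "total_tokens": response.usage.total_tokens,
        }
    return event
```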

Example for Model Events

Here’s an example model event:

{
  "source": "evaluation",
  "project_id": "65e0fc2d6a2eb95f55a92cbc",
  "event_id": "fead4996-5bec-4710-bc71-c1f97d311782",
  "parent_id": "397c9cbc-297f-42e9-bc1d-b2b0db850df5",
  "session_id": "397c9cbc-297f-42e9-bc1d-b2b0db850df5",
  "children_ids": [],
  "event_name": "Ramp Docs Answerer",
  "event_type": "model",
  "config": {
    "model": "mistralai/mistral-7b-instruct:free",
    "provider": "openrouter",
    "template": [
      {
        "role": "system",
        "content": "\nAnswer the user's question only using provided context. Don't lie.\n\nContext: {{context}}\n    "
      },
      {
        "role": "user",
        "content": "{{question}}"
      }
    ]
  },
  "inputs": {
    "question": "How do I find all the limits that have been set?",
    "context": "Search documentation\nOverview\nGetting started\nRate limiting\nConventions\nAccounting setup\nError Codes\nChangelog\nAuthorization\nApp connection\nRequest authorization\nOAuth scopes\nREST API\nAccounting\nAccounting Connections\nBills\nBusiness\nCard Programs\nCards\nCashbacks\nDepartments\nEntities\nLeads\nLedger Accounts\nLimits\nList limits\nCreate a limit\nFetch deferred task status\nFetch a limit\nUpdate a limit\nTerminate a limit\nSuspend a limit\nUnsuspend a limit\nLocations\nMemos\nMerchants\nReceipt Integrations\nReceipts\nReimbursements\nSpend Programs\nStatements\nToken\nTransactions\nTransfers\nUsers\nVendors\n\nSwitch to Light theme\nLimits\nList limits\nOAuth scopes\nlimits:read\nRequest Schemas\nRequest Body\nThis request has no body.\nRequest query string parameters\nentity_id string<uuid>\noptional\nspend_program_id string<uuid>\noptional\nuser_id string<uuid>\noptional\nstart string<uuid>\noptional\npage_size integer\noptional\nResponse Schemas\nHTTP 200\n\ndata array<object>\nrequired\n\npage object\nrequired\nGET /developer/v1/limits\n\nShell\n\nJavascript\n\nPython\n\ncurl \\\n  -H \"Accept: application/json\" \\\n  -H \"Authorization: Bearer $RAMP_API_TOKEN\" \\\n    'https://api.ramp.com/developer/v1/limits'\nSample response\nHTTP 200\n{\n  \"data\": [\n    {\n      \"balance\": {\n        \"cleared\": 65,\n        \"pending\": 35,\n        \"total\": 100\n      },\n      \"cards\": [\n        {\n          \"card_id\": \"a40a6ce8-70d4-4d06-91e1-0728ad9bbe39\"\n        }\n      ],\n      \"display_name\": \"T&E\",\n      \"entity_id\": \"c18d9d2e-964f-476d-8bb3-9ac078f00e11\",\n      \"has_program_overridden\": false,\n      \"id\": \"d8135cfe-0396-4b2d-b2cf-ad809fb04731\",\n      \"permitted_spend_types\": {\n        \"primary_card_enabled\": true,\n        \"reimbursements_enabled\": false\n      },\n      \"restrictions\": {\n        \"auto_lock_date\": null,\n        \"categories_whitelist\": [\n          35\n        ],\n        \"interval\": \"MONTHLY\",\n        \"limit\": 500,\n        \"next_interval_reset\": \"2022-12-01T00:00:00+00:00\",\n        \"start_of_interval_date\": \"2022-11-01T00:00:00+00:00\",\n        \"temporary_limit\": null,\n        \"transaction_amount_limit\": 200,\n        \"vendor_blacklist\": [\n          61\n        ]\n      },\n      \"spend_program_id\": \"3a5b1f62-988f-4190-bf31-b7ae87c5dfee\",\n      \"state\": \"ACTIVE\",\n      \"suspension\": {\n        \"acting_user_id\": \"e9186c3a-4650-48ef-aee4-56b77f5019bd\",\n        \"inserted_at\": \"2022-11-03T00:00:00+00:00\",\n        \"suspended_by_ramp\": false\n      },\n      \"users\": [\n        {\n          \"user_id\": \"2ba219ba-5867-453f-bec2-b8d0414b7f75\"\n        }\n      ]\n    }\n  ],\n  \"page\": {\n    \"next\": \"https://api.ramp.com/developer/v1/<resources>?<new_params>\"\n  }\n}\nCreate a limit\nLimit may either be created with spend program id (can provide display name and spending restrictions, cannot provide payment restrictions) or without (must provide display name, spending restrictions, and payment restrictions).\n\nOAuth scopes\nlimits:write\nRequest Schemas\nRequest Body\ndisplay_name string\noptional\nCosmetic display name of the limit.\n\n\nfulfillment object\noptional\nFulfillment details of the limit's card.\n\nidempotency_key string\nrequired\nAn idempotency key is a unique value generated by the client which the server uses to recognize subsequent retries of the same request. 
To avoid collisions, we encourage clients to use random generated UUIDs.\n\n\npermitted_spend_types object\noptional\nSpecifies the permitted spend types.\n\nspend_program_id string<uuid>\noptional\nThe id of the associated spend program.\n    "
      },
      {
        "role": "user",
        "content": "How do I find all the limits that have been set?"
      }
    ]
  },
  "outputs": {
    "content": "To find all the limits, you can use the Ramp API. You can make an HTTP GET request to \"https://api.ramp.com/developer/v1/limits\". This request will return a list of limits in response. The response will contain an array of objects, each representing a limit, and will also include additional metadata, such as the page of results and the total number ofles.\n\nFor more information about the Ramp API, and how to use it, please refer to the Ramp API documentation. The documentation provides details on the request and response formats, as well as information on each of the available operations and their parameters."
  },
  "start_time": 1710147521.798,
  "end_time": 1710147531.367,
  "duration": 9569.497,
  "metrics": {
    "Answer Faithfulness": 4.0,
    "Answer Faithfulness_explanation": "The AI assistant's answer provides a clear and accurate explanation of the two environments available for the Ramp API: Production and Sandbox. It correctly mentions that API calls in the Production environment should be directed to "https://api.ramp.com" and that this environment is intended for releasing the application to the public. It also correctly states that API calls in the Sandbox environment should be directed to "https://demo-api.ramp.com" and that this environment is for exploring different API endpoints and testing applications.",
    "Number of words": 100
  },
  "feedback": {},
  "metadata": {
    "completion_length": 139
  },
  "user_properties": {},
  "error": null,
}

Tool Events

When your LLM application interacts with external APIs, databases, or vector databases like Pinecone, you can instrument these interactions to evaluate performance, debug issues, and gain insights. Tool events are used to track the execution of anything other than the model. These can be used to capture:

  • Tool configuration like vector index name, vector index hyperparameters, any internal tool configuration, etc.
  • Tool metrics like retrieved chunk similarity, internal tool response validation, etc.
  • API-level metrics like request latency, index errors, internal tool errors, etc.

Schema for Tool Events

The tool event represents an interaction with an external resource. Send the following fields:

| Root Field | Field | Type | Description | Reserved |
|---|---|---|---|---|
| config | provider | string | The name of the external service provider offering vector database, API, or other relevant services (e.g., Pinecone, Weaviate, etc.). | No |
| | instance | string | The specific instance or deployment name of the service within the provider’s infrastructure, allowing for differentiation between multiple instances or deployments. | No |
| | embedding_model | string | The name or identifier of the embedding model used for calculating vector similarity, which is particularly relevant for vector databases or services that rely on vector representations of data. | No |
| | chunk_size | integer | The size (in characters or tokens) of the chunks into which data is split before being converted into vectors, if applicable to the service being used. This is important for services that operate on chunked data. | No |
| | chunk_overlap | integer | The amount of overlap (in characters or tokens) between consecutive chunks of data, if applicable to the service being used. This is also relevant for services that operate on chunked data with overlapping segments. | No |
| | db_vendor | string | Vector database provider name. | No |
| | {custom} | Any | Any additional configuration properties to track | No |
| inputs | top_k | integer | The number of top-ranked or most similar results to be retrieved from the vector database or service during a similarity search or ranking operation. | No |
| | query | string | The query string, vector representation, or any other input data used for retrieval, search, or processing by the external service. | No |
| | url | string | External API URL. | No |
| | {custom} | Any | Any arbitrary input properties to track | No |
| outputs | chunks | array | The data chunks, documents, or any other output retrieved or obtained from the external service as a result of the query or operation performed. | No |
| | scores | array<number> | The similarity scores, relevance scores, or any other scoring metrics associated with the retrieved chunks or documents, if applicable to the service being used. | No |
| | nodes | array<string> | Text chunks from retrievers. | No |
| | {custom} | Any | Any additional output properties to track | No |
| metrics | read_units | number | Vector Database Utilization metric. | No |
| | write_units | number | Vector Database Utilization metric. | No |
| | {custom} | Any | Any custom metrics or performance indicators | No |
| metadata | operationId | string | Operation identifier. | No |
| | {custom} | Any | Any additional metadata properties | No |
| duration | - | integer | The total time taken for the request or interaction with the external service, measured in milliseconds, which can be useful for identifying performance bottlenecks or slow operations. | No |
| error | - | string | Any errors, exceptions, or error messages that occurred during the retrieval request or interaction with the external service, which can aid in debugging and troubleshooting. | No |
| feedback | {custom} | Any | Any end-user provided feedback | No |
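
For example, a retrieval step could be captured as a tool event like the sketch below (the run_query callable, provider, and chunking parameters are placeholders for whatever your retriever actually uses):

```python
import time


def traced_retrieval(run_query, query, top_k=5):
    """Run a retrieval callable and record it as a tool event."""
    start = time.time()
    chunks, scores, error = [], [], None
    try:
        # run_query is assumed to return parallel lists of chunks and scores
        chunks, scores = run_query(query, top_k=top_k)
    except Exception as exc:
        error = str(exc)
    duration_ms = (time.time() - start) * 1000

    return {
        "event_type": "tool",
        "event_name": "vector_search",
        "config": {"provider": "pinecone",
                   "embedding_model": "text-embedding-3-small",
                   "chunk_size": 512, "chunk_overlap": 64},
        "inputs": {"query": query, "top_k": top_k},
        "outputs": {"chunks": chunks, "scores": scores},
        "duration": duration_ms,
        "error": error,
    }
```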

Example for Tool Events

Here’s an example tool event:

{
  "source": "evaluation",
  "project_id": "65e0fc2d6a2eb95f55a92cbc",
  "session_id": "d22c2b1d-b2cf-4593-b489-bb9ed2841d13",
  "event_id": "441de3d0-5e73-4351-ad05-5c60886937d1",
  "parent_id": "d22c2b1d-b2cf-4593-b489-bb9ed2841d13",
  "children_ids": [],
  "event_name": "Ramp Docs Retriever",
  "event_type": "tool",
  "config": {
    "provider": "pinecone"
  },
  "inputs": {
    "question": "How do I build an integration using Ramp API?"
  },
  "outputs": {
    "content": "Getting started\nWelcome to the Ramp API. Use the Ramp API to access transactions, issue cards, invite users, and so on.\n\nWe recommend getting started by connecting a new app and going through the request authorization documentation.\n\nFor Ramp developer partners\nIf you are a Ramp partner and want to offer your application to other Ramp customers, please contact your Ramp liaison and we will help set up your application.\n\n\nEnvironments\nThe API is available in two environments that can be accessed by sending requests to different hosts.\n\nEnvironment\nHost\nOpenAPI spec\nDescription\nProduction\nhttps://api.ramp.com\nProduction spec\nUse our production environment to go live with your application.\nSandbox\nhttps://demo-api.ramp.com\nSandbox spec ↗\nFill out this form ↗ to request a sandbox. A sandbox is a full-fledged environment in which you can explore different API endpoints and test your application.\n\n\nContact us\nHave feedback, questions, or ideas? Get in touch via email at developer-support@ramp.com ↗.\n\n\n\nRate limiting\nWe rate limit requests to preserve availability responsibly. The current limit (subject to change) is 200 requests, and gets refreshed in a 10 second rolling window.\n\nWhen the limit is reached, API calls will start getting 429 Too Many Requests responses.\n\nAfter a minute, the request limit will be replenished and you'll be able to make requests again. Note that any API calls made during this window will restart the clock, delaying the replenishment.\n\nPlease contact your Ramp liaison if you would like to request a limit increase for your account.\n\n\n\nApp connection\nAdmin user privileges required\nPlease note that only business admin or owner may register and configure the application. It is not recommended to downgrade the admin that created the app to a non-admin role.\n\n\nRegistering your application in the Ramp developer dashboard is the first step of building an integration based on Ramp API.\n\n\nFrom the Ramp developer ↗ settings page, click on Create new app to register a new application. Provide app name and app description, sign the Terms of service ↗, and click Create app.\n\n\nNow you have registered a new application. Click into it and configure the following parameters:\n\nClient ID and client secret: Credentials for your application; store securely.\nApp name and description\nGrant types: A list of grant types that the application may use to get access token. See authorization guide for more information.\nScopes: Defines scopes that may be granted to access token.\nRedirect URIs: A list of URIs telling Ramp where to send back the users in the authorization process.\nRedirect URI format\nNote that redirect URIs must either use https protocol or be in localhost.\n\n✅ https://example.com/callback is valid\n❎ http://example.com/callback is invalid\n✅ http://localhost:8000/callback is valid\n\n\n\n\nOAuth 2.0\nRamp API uses the OAuth 2.0 protocol ↗ to handle authorization and access control.\n\nWhich grant type should you use?\nIf you are a Ramp customer and your application only accesses your own Ramp data, then you can use either client credentials grant or authorization code grant. If your application is used by other Ramp customers, the authorization code grant is required.\n\nClient Credentials Grant\nClient Credentials ↗ grant can be used to get an access token outside of the context of a user. 
It is typically used by applications to directly access their own resources, not on behalf of a user.\n\nTo obtain a token, make a request to POST /developer/v1/token. You must include an Authorization header containing a base-64 representation of client_id:client_secret.\n\n\nShell\n\nJavaScript\n\nPython\n\ncurl --location --request POST 'https://api.ramp.com/developer/v1/token' \\\n    --header 'Authorization: Basic <base64-encoded client_id:client_secret>' \\\n    --header 'Content-Type: application/x-www-form-urlencoded' \\\n    --data-urlencode 'grant_type=client_credentials' \\\n    --data-urlencode 'scope=business:read transactions:read' \nThe response JSON payload contains a ready-to-use access_token. The Client Credentials Grant does not produce refresh tokens - you manually obtain new access tokens before the existing ones expire.\n\nAuthorization Code Grant\nThere are three parties involved in the Authorization Code flow -- the client (your application), the server (Ramp) and the user (data owner). The overall flow follows these steps:\n\nYour application sends the user to authenticate with Ramp.\nThe user sees the authorization prompt and approves the app's request for data access.\nThe user is redirected back via a redirect_uri with a temporary authorization_code.\nYour application exchanges the authorization_code for an access_token.\nRamp verifies the params and returns an access_token.\nYour application gets a new access_token with the refresh_token."
  },
  "start_time": 1710147532.796,
  "end_time": 1710147533.133,
  "duration": 337.009,
  "metrics": {
    "Context Relevance": 5,
    "Context Relevance_explanation": "The fetched context from the retriever performs well relative to the user's query. It directly addresses the user's question by providing information on how to build an integration using Ramp API. \n\n"
  },
  "feedback": {},
  "metadata": {},
  "user_properties": {},
  "error": null,
}

Chain Events

Chain events categorize events into different stages of your pipeline. These stages can be synchronous or asynchronous.

How Chain Events Work

Any event that has its “parent” set to a chain event becomes a step within that chain. This simple mechanism allows you to consolidate various events into a single unit, making it easier to monitor the progress of your pipeline.

Nesting for Hierarchy

You can also nest chains within each other. This hierarchical approach lets you track the execution of your pipeline in a structured and organized manner. This nesting feature can be particularly useful for complex workflows.
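
Nesting works by pointing one chain’s parent_id at another chain, exactly as with any other event. A rough sketch (the IDs and names are made up):

```python
# Top-level chain for the whole RAG pipeline, parented to the session
rag_pipeline = {"event_id": "chain-rag", "event_type": "chain",
                "event_name": "rag_pipeline", "parent_id": "session-1",
                "children_ids": ["chain-retrieval"]}

# A retrieval sub-chain nested inside it ...
retrieval = {"event_id": "chain-retrieval", "event_type": "chain",
             "event_name": "retrieval", "parent_id": "chain-rag",
             "children_ids": ["tool-search"]}

# ... and a tool event that becomes a step of the retrieval sub-chain
vector_search = {"event_id": "tool-search", "event_type": "tool",
                 "event_name": "vector_search", "parent_id": "chain-retrieval",
                 "children_ids": []}
```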

Session Event as a Special Case: The session event for a pipeline is essentially a chain event with all other events as its children. This means you can encapsulate the entire pipeline within a single session event, making it easy to manage and analyze.

By separating events into chains, you can track properties like:

  • Chain configuration like chain name, chain settings, etc.
  • Chain metrics like chain latency, chain throughput, etc.

Here’s an example chain event:

{
  "source": "development",
  "project_id": "64d69442f9fa4485aa1cc582",
  "event_id": "52f22f37-289c-4718-bc40-0231cc5c7a99",
  "session_id": "fa78fb31-5bf9-4717-bca1-88fee7fb026b",
  "parent_id": "fa78fb31-5bf9-4717-bca1-88fee7fb026b",
  "children_ids": [
    "a809865a-8663-4201-b70b-7f4fc355175b",
    "8af7a04a-e91e-4f42-b345-29eeb614e3e1"
  ],
  "event_type": "chain",
  "event_name": "query",
  "config": {
    "name": "query_rewriter_v1",
    "description": "Rewrite the query to improve retriever performance"
  },
  "inputs": {
    "query_str": "What did the author do growing up?"
  },
  "outputs": {
    "rewritten_query": "What did Paul Graham do growing up?"
  },
  "start_time": 1710244017.942,
  "end_time": 1710244019.976,
  "duration": 2033.809,
  "metrics": {},
  "feedback": {},
  "metadata": {
	"total_tokens": 10,
	"num_events": 2,
  },
  "user_properties": {},
  "error": null,
}

Next Steps

Refer to our tracing introduction guide to get started with tracing in HoneyHive.