Ollama is a fast, open-source, and lightweight model server for running large language models (LLMs) on commodity hardware.

With HoneyHive, you can trace all your LLM operations with a single line of code. A list of all supported integrations is available in the HoneyHive docs.

HoneyHive Setup

Follow the HoneyHive Installation Guide to get your API key and initialize the tracer.
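For reference, initialization is a single call, shown in this minimal sketch; the placeholder key and project name are assumptions to replace with your own values.

```python
from honeyhive import HoneyHiveTracer

# One-line tracer setup. The values below are placeholders;
# substitute your actual HoneyHive API key and project name.
HoneyHiveTracer.init(
    api_key="<your HoneyHive API key>",
    project="<your HoneyHive project>",
)
```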

Ollama Setup

Follow the Ollama Quickstart to get a model up and running locally, for example with ollama run llama3.2:1b.

Note: use version ollama==0.2.0 of the Python client (pip install ollama==0.2.0).
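Before adding tracing, you may want to confirm the local server responds. A quick sanity check with the Python client might look like this (the model name assumes you pulled llama3.2:1b in the step above, and the prompt is arbitrary):

```python
import ollama

# Send one chat turn to the local Ollama server to verify
# that the model pulled earlier is available and responding.
reply = ollama.chat(
    model="llama3.2:1b",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(reply["message"]["content"])
```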

Example

Here is an example of how to trace your Ollama calls in HoneyHive.
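The following is a minimal sketch combining the two setup steps above. It assumes the HoneyHiveTracer.init entry point from the HoneyHive Python SDK and the ollama.chat API; the environment variable names HH_API_KEY and HH_PROJECT are placeholders, not requirements.

```python
import os

import ollama
from honeyhive import HoneyHiveTracer

# Initialize the HoneyHive tracer once at startup. HH_API_KEY and
# HH_PROJECT are assumed environment variables; use whatever
# configuration mechanism fits your app.
HoneyHiveTracer.init(
    api_key=os.environ["HH_API_KEY"],
    project=os.environ["HH_PROJECT"],
)

# Ollama calls made after init are captured and appear as spans
# in the HoneyHive trace for this session.
response = ollama.chat(
    model="llama3.2:1b",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)

print(response["message"]["content"])
```

Consistent with the one-line setup described above, no per-call changes should be needed: once the tracer is initialized, the chat call is traced automatically.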

View your Traces

Once you run your code, you can view the execution trace in the HoneyHive UI by clicking the Data Store tab in the left sidebar.