Ollama is a fast, open-source, lightweight model server for running large language models (LLMs) on commodity hardware. With HoneyHive, you can trace all your operations using a single line of code. Find a list of all supported integrations here.
Go to the Ollama Quickstart to get an Ollama model up and running locally, for example with `ollama run llama3.2:1b`. Note: please use version `ollama==0.2.0` for Python.
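As a rough sketch of what this looks like end to end, the snippet below defines a helper that assembles an Ollama chat request and a function that initializes HoneyHive tracing before calling the model. The `HoneyHiveTracer.init` call, its `api_key`/`project` parameters, and the project name are assumptions for illustration; check the HoneyHive SDK reference for the exact signature.

```python
# Sketch: tracing an Ollama chat call with HoneyHive.
# Assumptions: HoneyHiveTracer.init(api_key=..., project=...) is the
# one-line tracing setup, and a local Ollama daemon is serving llama3.2:1b.
import os


def build_chat_request(model: str, prompt: str) -> dict:
    """Assemble keyword arguments for ollama.chat()."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def traced_chat(prompt: str, model: str = "llama3.2:1b") -> str:
    """Initialize tracing, then send one chat request to the local model."""
    import ollama  # pinned to ollama==0.2.0 per the note above
    from honeyhive import HoneyHiveTracer

    # The single line that enables tracing (parameter names are assumptions).
    HoneyHiveTracer.init(
        api_key=os.environ["HONEYHIVE_API_KEY"],
        project="my-ollama-project",  # hypothetical project name
    )

    response = ollama.chat(**build_chat_request(model, prompt))
    return response["message"]["content"]
```

With the Ollama daemon running and your HoneyHive API key exported, calling `traced_chat("Why is the sky blue?")` should return the model's reply while the request is recorded in your HoneyHive project.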