NVIDIA NeMo offers a suite of leading-edge NVIDIA-built and open-source generative AI models, meticulously fine-tuned for exceptional performance and efficiency. With the ability to deploy these models using NVIDIA NIM™ microservices and customize them through NeMo, developers can swiftly prototype and scale their AI applications.

With HoneyHive, you can trace all your operations using a single line of code. Find a list of all supported integrations here.

HoneyHive Setup

Follow the HoneyHive Installation Guide to get your API key and initialize the tracer.
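
If you haven't installed the SDKs yet, a typical setup (assuming the Python SDK and pip) looks like this:

pip install honeyhive openai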

NeMo Setup

Go to the NeMo Playground to get your NVIDIA API key.
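
Rather than hardcoding your key in source code, you can export it as an environment variable and read it at runtime. Here is a minimal sketch (the variable name NVIDIA_API_KEY is a convention used here, not a requirement of the API):

import os

# read the NVIDIA API key from the environment instead of hardcoding it
nvidia_api_key = os.environ["NVIDIA_API_KEY"]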

Example

Here is an example of how to trace a streaming chat completion request to NVIDIA's API in HoneyHive.


# NVIDIA's API is OpenAI-compatible, so we use the OpenAI client to interact with it
from openai import OpenAI  
from honeyhive import HoneyHiveTracer

# place the code below at the beginning of your application execution
HoneyHiveTracer.init(
    api_key="MY_HONEYHIVE_API_KEY", # paste your API key here
    project="MY_HONEYHIVE_PROJECT_NAME", # paste your project name here
)

# point the OpenAI client at NVIDIA's API endpoint
client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key="MY_NVIDIA_API_KEY",
)

completion = client.chat.completions.create(
    model="nvidia/mistral-nemo-minitron-8b-8k-instruct",
    messages=[
        {
            "role": "user",
            "content": "Write a limerick about the wonders of GPU computing.",
        }
    ],
    stream=True,
)

# print the streamed response chunks as they arrive
for chunk in completion:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
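
If you don't need token-by-token output, the same request works without streaming. Here is a minimal sketch of the non-streaming variant:

completion = client.chat.completions.create(
    model="nvidia/mistral-nemo-minitron-8b-8k-instruct",
    messages=[
        {
            "role": "user",
            "content": "Write a limerick about the wonders of GPU computing.",
        }
    ],
)

# without streaming, the full response is returned in a single object
print(completion.choices[0].message.content)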

View your Traces

Once you run your code, you can view your execution trace in the HoneyHive UI by clicking the Data Store tab in the left sidebar.