Learn how to integrate NVIDIA NeMo Models with HoneyHive
NVIDIA NeMo offers a suite of leading-edge NVIDIA-built and open-source generative AI models, meticulously fine-tuned for exceptional performance and efficiency. With the ability to deploy these models using NVIDIA NIM™ microservices and customize them through NeMo, developers can swiftly prototype and scale their AI applications. With HoneyHive, you can trace all your operations using a single line of code. Find a list of all supported integrations here.
Here is an example of how to trace your code in HoneyHive.
```python
# NVIDIA uses the OpenAI client to interact with their API
from openai import OpenAI
from honeyhive import HoneyHiveTracer

# place the code below at the beginning of your application execution
HoneyHiveTracer.init(
    api_key="MY_HONEYHIVE_API_KEY",  # paste your API key here
    project="MY_HONEYHIVE_PROJECT_NAME",  # paste your project name here
)

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key="MY_NVIDIA_API_KEY",
)

completion = client.chat.completions.create(
    model="nvidia/mistral-nemo-minitron-8b-8k-instruct",
    messages=[
        {
            "role": "user",
            "content": "Write a limerick about the wonders of GPU computing.",
        }
    ],
    stream=True,
)

for chunk in completion:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
```