Semantic Kernel is Microsoft’s open-source SDK for building AI agents and multi-agent systems. It provides a model-agnostic framework with plugins, function calling, and enterprise-ready orchestration.
HoneyHive works best with Semantic Kernel when you combine SK’s native OpenTelemetry diagnostics with a provider-specific instrumentor. SK-native spans preserve agent names and orchestration structure, while the provider instrumentor adds richer model input/output payloads.
Quick Start
Recommended setup. Enable SK’s native diagnostics before imports, initialize HoneyHive, then instrument your model provider so model events include full chat history and responses.
The examples on this page use OpenAIChatCompletion, so they use OpenAIInstrumentor. If your Semantic Kernel app uses Anthropic, Bedrock, Gemini, or another provider, use the matching OpenInference instrumentor for that provider when one exists.
To see where to initialize the tracer for your environment, including AWS Lambda and long-running servers, see Tracer Initialization.
```bash
pip install "honeyhive>=1.0.0rc0" semantic-kernel openinference-instrumentation-openai
```
```python
import os

# 1. Enable SK's native OpenTelemetry (before any other imports)
os.environ["SEMANTICKERNEL_EXPERIMENTAL_GENAI_ENABLE_OTEL_DIAGNOSTICS"] = "true"
os.environ["SEMANTICKERNEL_EXPERIMENTAL_GENAI_ENABLE_OTEL_DIAGNOSTICS_SENSITIVE"] = "true"

# 2. Initialize HoneyHive
from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)

# 3. Layer the provider instrumentor for richer model inputs/outputs
# This example uses OpenAIChatCompletion, so it uses OpenAIInstrumentor.
instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

# 4. Import and use Semantic Kernel
from semantic_kernel.agents import ChatCompletionAgent
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
```
Compatibility
| Requirement | Version |
|---|---|
| Python | 3.10+ |
| semantic-kernel | 1.27.0+ |
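If you hit import or tracing errors, it can help to confirm your environment matches the table above. The helper below is an illustrative sketch (not part of the HoneyHive or Semantic Kernel APIs) that checks the Python and `semantic-kernel` versions at runtime:

```python
import sys
from importlib.metadata import PackageNotFoundError, version

# Illustrative helper: compare the running environment against the
# compatibility table (Python 3.10+, semantic-kernel 1.27.0+).
def compatibility_issues() -> list:
    issues = []
    if sys.version_info < (3, 10):
        issues.append(
            f"Python {sys.version_info.major}.{sys.version_info.minor} is below 3.10"
        )
    try:
        sk = version("semantic-kernel")
    except PackageNotFoundError:
        issues.append("semantic-kernel is not installed")
    else:
        parts = sk.split(".")
        try:
            major, minor = int(parts[0]), int(parts[1])
        except (IndexError, ValueError):
            issues.append(f"could not parse semantic-kernel version {sk!r}")
        else:
            if (major, minor) < (1, 27):
                issues.append(f"semantic-kernel {sk} is below 1.27.0")
    return issues

for issue in compatibility_issues():
    print(f"Compatibility issue: {issue}")
```

An empty result means the environment meets both requirements.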
Example: Agent with Plugin
```python
import asyncio
import os
from typing import Annotated

# Enable SK OTel before any imports
os.environ["SEMANTICKERNEL_EXPERIMENTAL_GENAI_ENABLE_OTEL_DIAGNOSTICS"] = "true"
os.environ["SEMANTICKERNEL_EXPERIMENTAL_GENAI_ENABLE_OTEL_DIAGNOSTICS_SENSITIVE"] = "true"

from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)

instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

from semantic_kernel.agents import ChatCompletionAgent
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.functions import kernel_function


class MathPlugin:
    @kernel_function(description="Evaluate a math expression")
    def calculate(
        self, expression: Annotated[str, "The math expression"]
    ) -> str:
        # eval() is used here for brevity only; never call it on untrusted input.
        return str(eval(expression))


agent = ChatCompletionAgent(
    service=OpenAIChatCompletion(ai_model_id="gpt-4o-mini"),
    name="MathAssistant",
    instructions="Use the calculate function to solve math problems.",
    plugins=[MathPlugin()],
)


async def main():
    response = await agent.get_response(messages="What is 25 * 4?")
    print(response.content)


asyncio.run(main())
```
Example: Multi-Agent (Agents-as-Plugins)
Semantic Kernel supports using agents as plugins for orchestration patterns:
```python
import asyncio
import os

os.environ["SEMANTICKERNEL_EXPERIMENTAL_GENAI_ENABLE_OTEL_DIAGNOSTICS"] = "true"
os.environ["SEMANTICKERNEL_EXPERIMENTAL_GENAI_ENABLE_OTEL_DIAGNOSTICS_SENSITIVE"] = "true"

from honeyhive import HoneyHiveTracer
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer = HoneyHiveTracer.init(
    api_key=os.getenv("HH_API_KEY"),
    project=os.getenv("HH_PROJECT"),
)

instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

from semantic_kernel.agents import ChatCompletionAgent
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

# Specialist agents
billing_agent = ChatCompletionAgent(
    service=OpenAIChatCompletion(ai_model_id="gpt-4o-mini"),
    name="BillingAgent",
    instructions="You handle billing issues like charges and payment failures.",
)
refund_agent = ChatCompletionAgent(
    service=OpenAIChatCompletion(ai_model_id="gpt-4o-mini"),
    name="RefundAgent",
    instructions="Assist users with refund inquiries and status updates.",
)

# Triage agent with specialists as plugins
triage_agent = ChatCompletionAgent(
    service=OpenAIChatCompletion(ai_model_id="gpt-4o-mini"),
    name="TriageAgent",
    instructions="""Route requests to the appropriate specialist:
- Billing issues → BillingAgent
- Refund requests → RefundAgent""",
    plugins=[billing_agent, refund_agent],
)


async def main():
    response = await triage_agent.get_response(
        messages="I was charged twice for my subscription."
    )
    print(response.content)


asyncio.run(main())
```
Troubleshooting
Traces not appearing
- Enable SK diagnostics first - Environment variables must be set before any imports:

  ```python
  import os

  # ✅ Correct - set env vars first
  os.environ["SEMANTICKERNEL_EXPERIMENTAL_GENAI_ENABLE_OTEL_DIAGNOSTICS"] = "true"
  os.environ["SEMANTICKERNEL_EXPERIMENTAL_GENAI_ENABLE_OTEL_DIAGNOSTICS_SENSITIVE"] = "true"

  from honeyhive import HoneyHiveTracer
  tracer = HoneyHiveTracer.init(project="your-project")
  from semantic_kernel.agents import ChatCompletionAgent

  # ❌ Wrong - env vars set after imports
  from semantic_kernel.agents import ChatCompletionAgent
  os.environ["SEMANTICKERNEL_EXPERIMENTAL_GENAI_ENABLE_OTEL_DIAGNOSTICS"] = "true"
  ```

- Check environment variables - Ensure `HH_API_KEY` and `HH_PROJECT` are set.
- Verify OpenAI credentials - Ensure `OPENAI_API_KEY` is configured.
- Instrument your provider - Use the matching provider instrumentor if you need model events with full inputs and outputs in HoneyHive. This page uses `OpenAIInstrumentor` because the examples use `OpenAIChatCompletion`.
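The credential checks above can be automated with a small preflight snippet. This is an illustrative helper (not part of the HoneyHive SDK) that lists required environment variables that are unset or empty before the tracer is initialized:

```python
import os

# Variables this page's examples rely on; adjust for your provider.
REQUIRED_ENV_VARS = ("HH_API_KEY", "HH_PROJECT", "OPENAI_API_KEY")

def missing_env_vars(required=REQUIRED_ENV_VARS):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

missing = missing_env_vars()
if missing:
    print(f"Set these before initializing the tracer: {', '.join(missing)}")
```

Running this before `HoneyHiveTracer.init` surfaces missing credentials immediately instead of as silently dropped traces.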
Missing prompts/completions in traces
Enable sensitive diagnostics and the matching provider instrumentor to capture full message content on model events. This OpenAI example uses OpenAIInstrumentor:
```python
os.environ["SEMANTICKERNEL_EXPERIMENTAL_GENAI_ENABLE_OTEL_DIAGNOSTICS_SENSITIVE"] = "true"

from openinference.instrumentation.openai import OpenAIInstrumentor

instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)
```
Resources