Microsoft Foundry

Coralogix's AI Observability integrations enable organizations to gain deep insight into their AI applications, helping them monitor, analyze, and optimize performance across the stack. Through integrations with the Azure AI Projects Python SDK (Microsoft Foundry), Coralogix delivers end-to-end visibility into AI workloads, supporting proactive issue detection and efficient performance tuning.

Overview

This library offers customized OpenTelemetry instrumentation for the Azure AI Projects Python SDK (Microsoft Foundry), optimized to support large language model (LLM) application development with streamlined integration, detailed production tracing, and effective debugging capabilities.

Requirements

Installation

Run the following command.

pip install "llm-tracekit-microsoft-foundry"

Authentication

Authentication details are passed when defining the OTel span exporter:

  1. Set the endpoint to the ingress.<your Coralogix domain>:443 address that corresponds to your Coralogix domain.
  2. Use your customized API key in the authorization request header.
  3. Provide the application and subsystem names.
from llm_tracekit.microsoft_foundry import setup_export_to_coralogix

setup_export_to_coralogix(
    coralogix_token="<your_coralogix_token>",
    coralogix_endpoint="ingress.<your Coralogix domain>:443",
    service_name="ai-service",
    application_name="ai-application",
    subsystem_name="ai-subsystem",
    capture_content=True,
)

Note

All of the authentication parameters can also be provided through environment variables (CX_TOKEN, CX_ENDPOINT, etc.).
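
For example, the connection details can come from the environment instead of being passed as arguments. A minimal sketch using the CX_TOKEN and CX_ENDPOINT variables named above (the values shown are placeholders):

import os

from llm_tracekit.microsoft_foundry import setup_export_to_coralogix

# Placeholder values; in practice these would be set by your deployment
# environment rather than in code.
os.environ["CX_TOKEN"] = "<your_coralogix_token>"
os.environ["CX_ENDPOINT"] = "ingress.<your Coralogix domain>:443"

# With the connection details in the environment, only the naming and
# capture parameters need to be passed explicitly.
setup_export_to_coralogix(
    service_name="ai-service",
    application_name="ai-application",
    subsystem_name="ai-subsystem",
    capture_content=True,
)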

Usage

This section describes how to set up instrumentation for the Microsoft Foundry SDK.

Set up tracing

Automatic

Use the setup_export_to_coralogix function to set up tracing and export traces to Coralogix. See the code snippet in the Authentication section.

Manual

Alternatively, you can set up tracing manually.

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

# Identify this service on the spans it emits.
tracer_provider = TracerProvider(
    resource=Resource.create({SERVICE_NAME: "ai-service"}),
)
exporter = OTLPSpanExporter()
# SimpleSpanProcessor exports each span synchronously as soon as it ends.
span_processor = SimpleSpanProcessor(exporter)
tracer_provider.add_span_processor(span_processor)
trace.set_tracer_provider(tracer_provider)
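
When created with no arguments, OTLPSpanExporter reads its destination and headers from the standard OTEL_EXPORTER_OTLP_* environment variables. Alternatively, the exporter can be configured explicitly. The snippet below is a minimal sketch that assumes Coralogix accepts the API key as an authorization bearer header over OTLP/gRPC; verify the exact endpoint and header format against your Coralogix account settings.

# Hypothetical explicit configuration; replace the placeholders with your values.
exporter = OTLPSpanExporter(
    endpoint="https://ingress.<your Coralogix domain>:443",
    headers={"authorization": "Bearer <your_coralogix_token>"},
)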

Instrument

To instrument all clients, call the instrument method.

from llm_tracekit.microsoft_foundry import MicrosoftFoundryInstrumentor

MicrosoftFoundryInstrumentor().instrument()

Uninstrument

To uninstrument clients, call the uninstrument method.

MicrosoftFoundryInstrumentor().uninstrument()

Full example

import os
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
from llm_tracekit.microsoft_foundry import MicrosoftFoundryInstrumentor, setup_export_to_coralogix

# Optional: Configure sending spans to Coralogix
# Reads Coralogix connection details from the following environment variables:
# - CX_TOKEN
# - CX_ENDPOINT
setup_export_to_coralogix(
    service_name="ai-service",
    application_name="ai-application",
    subsystem_name="ai-subsystem",
    capture_content=True,
)

# Activate instrumentation
MicrosoftFoundryInstrumentor().instrument()

# Microsoft Foundry usage example
with AIProjectClient(
    endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
) as project_client:
    with project_client.get_openai_client() as openai_client:
        # Using Responses API
        response = openai_client.responses.create(
            model="gpt-4o-mini",
            input="Write a short poem on open telemetry.",
        )
        print(response.output_text)

        # Using Chat Completions API
        response = openai_client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "user", "content": "Hello, world!"},
            ],
        )
        print(response.choices[0].message.content)

Enable message content capture

By default, message content (prompt contents, completions, function arguments, and return values) is not captured. To capture message content as span attributes, do one of the following:

  • Pass capture_content=True when calling setup_export_to_coralogix.
  • Set the environment variable OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT to true.

Most Coralogix AI evaluations require message contents to function properly, so enabling message capture is strongly recommended.
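
For example, when relying on the environment variable, capture can be enabled before instrumenting (a minimal sketch):

import os

from llm_tracekit.microsoft_foundry import MicrosoftFoundryInstrumentor

# Equivalent to passing capture_content=True to setup_export_to_coralogix.
os.environ["OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT"] = "true"

MicrosoftFoundryInstrumentor().instrument()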

Semantic conventions

Microsoft Foundry-specific attributes

In addition to standard GenAI semantic conventions, this instrumentation captures Foundry-specific context.
Attribute | Type | Description | Example
gen_ai.microsoft_foundry.agent.name | string | Agent name from extra_body.agent_reference | MyAgent
gen_ai.microsoft_foundry.agent.version | string | Agent version, if specified | v1
gen_ai.microsoft_foundry.conversation_id | string | Conversation ID, if using conversations | conv_123
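
For illustration, these attributes are populated when an agent reference is sent through the request's extra_body. The exact shape of agent_reference below is an assumption based on the attribute descriptions above; consult the Foundry documentation for the authoritative format.

# Hypothetical agent_reference payload; field names are assumptions.
response = openai_client.responses.create(
    model="gpt-4o-mini",
    input="Summarize today's weather.",
    extra_body={"agent_reference": {"name": "MyAgent", "version": "v1"}},
)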

Standard GenAI attributes

Attribute | Type | Description | Example
gen_ai.prompt.<message_number>.role | string | Role of message author for user message <message_number> | system, user, assistant, tool
gen_ai.prompt.<message_number>.content | string | Contents of user message <message_number> | What's the weather in Paris?
gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.id | string | ID of tool call in user message <message_number> | call_ABC123
gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.type | string | Type of tool call in user message <message_number> | function
gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.function.name | string | Name of the function used in tool call within user message <message_number> | get_current_weather
gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.function.arguments | string | Arguments passed to the function used in tool call within user message <message_number> | {"location": "Seattle, WA"}
gen_ai.prompt.<message_number>.tool_call_id | string | Tool call ID in user message <message_number> (for tool results) | call_ABC123
gen_ai.completion.<choice_number>.role | string | Role of message author for choice <choice_number> in model response | assistant
gen_ai.completion.<choice_number>.finish_reason | string | Finish reason for choice <choice_number> in model response | stop, tool_calls
gen_ai.completion.<choice_number>.content | string | Contents of choice <choice_number> in model response | The weather in Paris is rainy and overcast, with temperatures around 57°F
gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.id | string | ID of tool call in choice <choice_number> | call_ABC123
gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.type | string | Type of tool call in choice <choice_number> | function
gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.function.name | string | Name of the function used in tool call within choice <choice_number> | get_current_weather
gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.function.arguments | string | Arguments passed to the function used in tool call within choice <choice_number> | {"location": "Seattle, WA"}
gen_ai.request.tools.<tool_number>.type | string | Type of tool entry in tools list | function
gen_ai.request.tools.<tool_number>.function.name | string | Name of the function to use in tool calls | get_current_weather
gen_ai.request.tools.<tool_number>.function.description | string | Description of the function | Get the current weather in a given location
gen_ai.request.tools.<tool_number>.function.parameters | string | JSON describing the schema of the function parameters | {"type": "object", "properties": {"location": {"type": "string"}}, "required": ["location"]}
gen_ai.request.user | string | A unique identifier representing the end user | [email protected]
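
To illustrate how the tool-related attributes map to a request, the sketch below declares a single function tool with the chat completions client from the full example. The tool definition is captured under gen_ai.request.tools.0.*, and a tool call chosen by the model is captured under gen_ai.completion.0.tool_calls.0.*.

# Tool definition recorded as gen_ai.request.tools.0.* attributes.
response = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in Seattle?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {"location": {"type": "string"}},
                    "required": ["location"],
                },
            },
        }
    ],
)

# If the model calls the tool, the call is recorded as
# gen_ai.completion.0.tool_calls.0.* attributes on the span.
print(response.choices[0].message.tool_calls)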

Next steps

Once your integration is set up, explore the AI Center Overview to monitor performance, costs, quality issues, and security across all your AI applications — and to set up Guardrails for real-time policy enforcement.