Code examples
Copy-pasteable scripts that send OpenTelemetry GenAI spans to Coralogix AI Center using the new semantic conventions. Each example uses the standard OpenTelemetry OTLP exporter — no Coralogix-specific SDK is required.
Use the domain selector at the top of this page to set your Coralogix region. The example commands and code snippets on this page update automatically to use the matching OTLP endpoint.
Example 1: OpenAI Agents SDK
Install
pip install openai-agents opentelemetry-instrumentation-openai-agents-v2 opentelemetry-instrumentation-openai-v2 opentelemetry-sdk opentelemetry-exporter-otlp-proto-grpc
Environment variables
export OPENAI_API_KEY="<YOUR_OPENAI_API_KEY>"
export OTEL_EXPORTER_OTLP_ENDPOINT="https://ingress.:443"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <YOUR_CX_API_KEY>"
export OTEL_RESOURCE_ATTRIBUTES="cx.application.name=my-genai-app,cx.subsystem.name=my-subsystem"
export OTEL_SEMCONV_STABILITY_OPT_IN="gen_ai_latest_experimental"
export OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT="true"
Script
import asyncio
from agents import Agent, Runner, function_tool
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.openai_agents import OpenAIAgentsInstrumentor
from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
def configure_otel() -> TracerProvider:
    resource = Resource.create()
    provider = TracerProvider(resource=resource)
    exporter = OTLPSpanExporter()
    provider.add_span_processor(BatchSpanProcessor(exporter))
    trace.set_tracer_provider(provider)
    return provider

@function_tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"The weather in {city} is 22C and sunny."

async def main():
    provider = configure_otel()
    OpenAIAgentsInstrumentor().instrument(tracer_provider=provider)
    OpenAIInstrumentor().instrument(tracer_provider=provider)
    agent = Agent(
        name="Travel Assistant",
        instructions="You are a concise travel assistant. Answer in one sentence.",
        tools=[get_weather],
    )
    result = await Runner.run(agent, input="What's the weather in Paris?")
    print(f"Agent response: {result.final_output}")
    provider.force_flush()
    provider.shutdown()

if __name__ == "__main__":
    asyncio.run(main())
Expected span attributes
- gen_ai.operation.name="chat"
- gen_ai.system="openai"
- gen_ai.request.model, gen_ai.response.model
- gen_ai.usage.input_tokens, gen_ai.usage.output_tokens
- gen_ai.input.messages, gen_ai.output.messages (as log events with content capture)
- Agent spans: gen_ai.agent.name, gen_ai.tool.name
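Before pointing the exporter at Coralogix, it can help to sanity-check that finished spans actually carry these attributes. A minimal sketch of such a check — the helper name and required-key list are ours, not part of any SDK — working on a span's attribute dict:

```python
# Hypothetical helper: report which of the required GenAI chat attributes
# are absent from a finished span's attribute dict.
REQUIRED_CHAT_ATTRS = [
    "gen_ai.operation.name",
    "gen_ai.system",
    "gen_ai.request.model",
    "gen_ai.usage.input_tokens",
    "gen_ai.usage.output_tokens",
]

def missing_genai_attrs(attrs: dict) -> list:
    """Return the names of required GenAI attributes missing from a span."""
    return [key for key in REQUIRED_CHAT_ATTRS if key not in attrs]

# Example: a span that never recorded token usage.
span_attrs = {"gen_ai.operation.name": "chat", "gen_ai.system": "openai",
              "gen_ai.request.model": "gpt-4o-mini"}
print(missing_genai_attrs(span_attrs))
```

You can feed it the `attributes` of spans captured with an in-memory or console exporter during local runs.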
Tip
If pip install fails for pre-release packages, try pip install --pre opentelemetry-instrumentation-openai-agents-v2.
Example 2: Anthropic / Claude SDK
Install
pip install anthropic opentelemetry-instrumentation-anthropic opentelemetry-sdk opentelemetry-exporter-otlp-proto-grpc
Environment variables
export ANTHROPIC_API_KEY="<YOUR_ANTHROPIC_API_KEY>"
export OTEL_EXPORTER_OTLP_ENDPOINT="https://ingress.:443"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <YOUR_CX_API_KEY>"
export OTEL_RESOURCE_ATTRIBUTES="cx.application.name=my-genai-app,cx.subsystem.name=my-service"
export OTEL_SEMCONV_STABILITY_OPT_IN="gen_ai_latest_experimental"
export TRACELOOP_TRACE_CONTENT="true"
Script
import anthropic
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.anthropic import AnthropicInstrumentor
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
def configure_otel() -> TracerProvider:
    resource = Resource.create()
    provider = TracerProvider(resource=resource)
    exporter = OTLPSpanExporter()
    provider.add_span_processor(BatchSpanProcessor(exporter))
    trace.set_tracer_provider(provider)
    return provider

def main():
    provider = configure_otel()
    AnthropicInstrumentor().instrument(tracer_provider=provider)
    client = anthropic.Anthropic()
    message = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=256,
        messages=[
            {"role": "user", "content": "What is OpenTelemetry in one sentence?"}
        ],
    )
    print(f"Claude response: {message.content[0].text}")
    provider.force_flush()
    provider.shutdown()

if __name__ == "__main__":
    main()
Expected span attributes
- gen_ai.system="anthropic"
- gen_ai.request.model="claude-sonnet-4-20250514"
- gen_ai.usage.input_tokens, gen_ai.usage.output_tokens
- gen_ai.response.finish_reasons=["end_turn"]
Warning
opentelemetry-instrumentation-anthropic on PyPI is from OpenLLMetry (Traceloop), not official OTel contrib. Semconv migration from legacy to new is in progress. Content capture uses TRACELOOP_TRACE_CONTENT=true (not the standard OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT).
Example 3: AWS Bedrock
Install
pip install boto3 opentelemetry-instrumentation-botocore opentelemetry-sdk opentelemetry-exporter-otlp-proto-grpc
Environment variables
export AWS_ACCESS_KEY_ID="<YOUR_AWS_ACCESS_KEY>"
export AWS_SECRET_ACCESS_KEY="<YOUR_AWS_SECRET_KEY>"
export AWS_DEFAULT_REGION="us-east-1"
export OTEL_EXPORTER_OTLP_ENDPOINT="https://ingress.:443"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <YOUR_CX_API_KEY>"
export OTEL_RESOURCE_ATTRIBUTES="cx.application.name=my-genai-app,cx.subsystem.name=my-service"
export OTEL_SEMCONV_STABILITY_OPT_IN="gen_ai_latest_experimental"
export OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT="true"
Script
import boto3
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.botocore import BotocoreInstrumentor
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
def configure_otel() -> TracerProvider:
    resource = Resource.create()
    provider = TracerProvider(resource=resource)
    exporter = OTLPSpanExporter()
    provider.add_span_processor(BatchSpanProcessor(exporter))
    trace.set_tracer_provider(provider)
    return provider

def main():
    provider = configure_otel()
    BotocoreInstrumentor().instrument(tracer_provider=provider)
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=[
            {
                "role": "user",
                "content": [{"text": "What is OpenTelemetry in one sentence?"}],
            }
        ],
        inferenceConfig={"maxTokens": 256, "temperature": 0.7},
    )
    text = response["output"]["message"]["content"][0]["text"]
    usage = response["usage"]
    print(f"Bedrock response: {text}")
    print(f"Tokens - input: {usage['inputTokens']}, output: {usage['outputTokens']}")
    provider.force_flush()
    provider.shutdown()

if __name__ == "__main__":
    main()
Expected span attributes
- gen_ai.system / gen_ai.provider.name="aws.bedrock"
- gen_ai.request.model="anthropic.claude-3-haiku-20240307-v1:0"
- gen_ai.operation.name="chat"
- gen_ai.usage.input_tokens, gen_ai.usage.output_tokens
- gen_ai.request.max_tokens=256, gen_ai.request.temperature=0.7
- AWS-specific: rpc.system="aws-api", rpc.service="BedrockRuntime"
Tip
Use the Converse API (not InvokeModel) — it has full tracing support in the botocore instrumentation. The model must be enabled in your AWS account for the chosen region.
Example 4: OpenLLMetry (Traceloop SDK) — Python
OpenLLMetry is one of the most widely used community instrumentations for GenAI. It auto-instruments OpenAI calls with zero code changes.
Warning
OpenLLMetry currently uses legacy semconv (gen_ai.prompt / gen_ai.completion indexed attributes). Migration to new is tracked in issue #3515. Coralogix AI Center supports both.
Install
pip install traceloop-sdk openai
Environment variables
export OPENAI_API_KEY="<YOUR_OPENAI_API_KEY>"
export TRACELOOP_BASE_URL="https://ingress.:443/v1/traces"
export TRACELOOP_HEADERS="Authorization=Bearer <YOUR_CX_API_KEY>"
export TRACELOOP_TRACE_CONTENT="true"
export OTEL_RESOURCE_ATTRIBUTES="cx.application.name=my-genai-app,cx.subsystem.name=my-service"
Script
from openai import OpenAI
from traceloop.sdk import Traceloop
Traceloop.init() # reads TRACELOOP_BASE_URL, TRACELOOP_HEADERS from env
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "What is OpenTelemetry in one sentence?"},
    ],
    max_tokens=100,
)
print(response.choices[0].message.content)
Expected span attributes (legacy semconv)
- gen_ai.system="openai"
- gen_ai.request.model="gpt-4o-mini"
- gen_ai.usage.prompt_tokens, gen_ai.usage.completion_tokens
- gen_ai.prompt.0.role, gen_ai.prompt.0.content, gen_ai.prompt.1.role, gen_ai.prompt.1.content
- gen_ai.completion.0.role, gen_ai.completion.0.content
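Coralogix AI Center accepts both conventions, but if you want to compare legacy spans with the new format side by side, the indexed prompt attributes can be folded into the new JSON parts shape. A minimal sketch — the helper name is ours, not part of any SDK:

```python
import json

def legacy_prompts_to_input_messages(attrs: dict) -> str:
    """Fold legacy indexed attributes (gen_ai.prompt.N.role / .content)
    into the new gen_ai.input.messages JSON parts format."""
    messages = []
    i = 0
    while f"gen_ai.prompt.{i}.role" in attrs:
        messages.append({
            "role": attrs[f"gen_ai.prompt.{i}.role"],
            "parts": [{"type": "text",
                       "content": attrs.get(f"gen_ai.prompt.{i}.content", "")}],
        })
        i += 1
    return json.dumps(messages)

legacy = {
    "gen_ai.prompt.0.role": "system",
    "gen_ai.prompt.0.content": "You are a concise assistant.",
    "gen_ai.prompt.1.role": "user",
    "gen_ai.prompt.1.content": "What is OpenTelemetry in one sentence?",
}
print(legacy_prompts_to_input_messages(legacy))
```

The same folding pattern applies to gen_ai.completion.N.* for the output side.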
Example 5: Java — manual instrumentation
No auto-instrumentation exists for OpenAI in Java, so this example creates manual spans using the new semantic conventions.
Dependencies (pom.xml)
<dependencies>
  <dependency>
    <groupId>com.openai</groupId>
    <artifactId>openai-java</artifactId>
    <version>4.35.0</version>
  </dependency>
  <dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-sdk</artifactId>
  </dependency>
  <dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-exporter-otlp</artifactId>
  </dependency>
</dependencies>
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.opentelemetry</groupId>
      <artifactId>opentelemetry-bom</artifactId>
      <version>1.59.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
Environment variables
export OPENAI_API_KEY="sk-..."
export CX_OTEL_ENDPOINT="https://ingress.:443"
export CX_API_KEY="<YOUR_CX_API_KEY>"
Script
package com.example;
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
import com.openai.models.*;
import io.opentelemetry.api.common.*;
import io.opentelemetry.api.trace.*;
import io.opentelemetry.context.Scope;
import io.opentelemetry.exporter.otlp.trace.OtlpGrpcSpanExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.resources.Resource;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;
public class GenAiManualInstrumentation {
    public static void main(String[] args) {
        var cxEndpoint = System.getenv().getOrDefault("CX_OTEL_ENDPOINT",
                "https://ingress.:443");
        var cxApiKey = System.getenv().getOrDefault("CX_API_KEY", "");
        var exporter = OtlpGrpcSpanExporter.builder()
                .setEndpoint(cxEndpoint)
                .addHeader("Authorization", "Bearer " + cxApiKey).build();
        var resource = Resource.getDefault().merge(Resource.create(
                Attributes.builder()
                        .put("service.name", "java-genai-demo")
                        .put("cx.application.name", "my-genai-app")
                        .put("cx.subsystem.name", "my-service").build()));
        var tracerProvider = SdkTracerProvider.builder()
                .addSpanProcessor(BatchSpanProcessor.builder(exporter).build())
                .setResource(resource).build();
        var otel = OpenTelemetrySdk.builder().setTracerProvider(tracerProvider).build();
        var tracer = otel.getTracer("genai-manual");
        var openAi = OpenAIOkHttpClient.fromEnv();
        var model = "gpt-4o-mini";
        var systemMsg = "You are a concise assistant.";
        var userMsg = "Explain distributed tracing in one sentence.";
        var inputJson = "[" +
                "{\"role\":\"system\",\"parts\":[{\"type\":\"text\",\"content\":\"" + systemMsg + "\"}]}," +
                "{\"role\":\"user\",\"parts\":[{\"type\":\"text\",\"content\":\"" + userMsg + "\"}]}]";
        Span span = tracer.spanBuilder("chat " + model)
                .setSpanKind(SpanKind.CLIENT)
                .setAttribute("gen_ai.operation.name", "chat")
                .setAttribute("gen_ai.system", "openai")
                .setAttribute("gen_ai.request.model", model)
                .setAttribute("gen_ai.input.messages", inputJson)
                .startSpan();
        try (Scope ignored = span.makeCurrent()) {
            var params = ChatCompletionCreateParams.builder()
                    .model(ChatModel.GPT_4O_MINI)
                    .maxCompletionTokens(100)
                    .addSystemMessage(systemMsg)
                    .addUserMessage(userMsg).build();
            var completion = openAi.chat().completions().create(params);
            var text = completion.choices().get(0).message().content().orElse("");
            span.setAttribute("gen_ai.response.model", completion.model());
            span.setAttribute("gen_ai.response.id", completion.id());
            span.setAttribute("gen_ai.usage.input_tokens",
                    completion.usage().map(u -> u.promptTokens()).orElse(0L));
            span.setAttribute("gen_ai.usage.output_tokens",
                    completion.usage().map(u -> u.completionTokens()).orElse(0L));
            span.setAttribute("gen_ai.output.messages",
                    "[{\"role\":\"assistant\",\"parts\":[{\"type\":\"text\",\"content\":\"" +
                    text.replace("\"", "\\\"") + "\"}]}]");
            System.out.println("Response: " + text);
        } catch (Exception e) {
            span.setStatus(StatusCode.ERROR, e.getMessage());
            span.recordException(e);
        } finally {
            span.end();
        }
        tracerProvider.forceFlush();
        tracerProvider.shutdown();
    }
}
Example 6: .NET — Microsoft.Extensions.AI
Microsoft.Extensions.AI natively emits the new semantic conventions (v1.37), so no opt-in is needed.
Install
dotnet new console -n OtelGenAiDotnet && cd OtelGenAiDotnet
dotnet add package Microsoft.Extensions.AI.OpenAI --version 10.5.1
dotnet add package Microsoft.Extensions.AI --version 10.5.2
dotnet add package OpenTelemetry.Exporter.OpenTelemetryProtocol --version 1.15.3
dotnet add package OpenTelemetry.Extensions.Hosting --version 1.15.3
Environment variables
export OPENAI_API_KEY="sk-..."
export CX_OTEL_ENDPOINT="https://ingress.:443"
export CX_API_KEY="<YOUR_CX_API_KEY>"
Script (Program.cs)
using Microsoft.Extensions.AI;
using OpenAI;
using OpenTelemetry;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;
var cxEndpoint = Environment.GetEnvironmentVariable("CX_OTEL_ENDPOINT")
    ?? "https://ingress.:443";
var cxApiKey = Environment.GetEnvironmentVariable("CX_API_KEY") ?? "";
var openAiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")
    ?? throw new InvalidOperationException("OPENAI_API_KEY required");
const string sourceName = "GenAI.Demo";

using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .SetResourceBuilder(ResourceBuilder.CreateDefault()
        .AddService("dotnet-genai-demo")
        .AddAttributes(new Dictionary<string, object> {
            ["cx.application.name"] = "my-genai-app",
            ["cx.subsystem.name"] = "my-service",
        }))
    .AddSource(sourceName)
    .AddOtlpExporter(opts => {
        opts.Endpoint = new Uri(cxEndpoint);
        opts.Headers = $"Authorization=Bearer {cxApiKey}";
        opts.Protocol = OpenTelemetry.Exporter.OtlpExportProtocol.Grpc;
    })
    .Build();

IChatClient chatClient = new ChatClientBuilder(
        new OpenAIClient(openAiKey).GetChatClient("gpt-4o-mini").AsIChatClient())
    .UseOpenTelemetry(sourceName: sourceName,
        configure: c => c.EnableSensitiveData = true)
    .Build();

var response = await chatClient.GetResponseAsync(
    new List<ChatMessage> {
        new(ChatRole.System, "You are a concise assistant."),
        new(ChatRole.User, "What is OpenTelemetry in one sentence?"),
    },
    new ChatOptions { MaxOutputTokens = 100, Temperature = 0.7f });
Console.WriteLine($"Response: {response.Text}");
Console.WriteLine($"Tokens: {response.Usage?.InputTokenCount} in, {response.Usage?.OutputTokenCount} out");
Tip
EnableSensitiveData = true is required for message content. The IChatClient abstraction works with any provider (Azure OpenAI, Ollama, etc.).
Example 7: Manual instrumentation (no third-party library)
Pure manual spans with no instrumentation library — just the OTel SDK and raw HTTP. This is a universal template you can adapt to any provider or language.
Install
pip install httpx opentelemetry-sdk opentelemetry-exporter-otlp-proto-grpc
Environment variables
export OPENAI_API_KEY="sk-..."
export CX_OTEL_ENDPOINT="https://ingress.:443"
export CX_API_KEY="<YOUR_CX_API_KEY>"
Script
import json, os
import httpx
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.trace import SpanKind, StatusCode
CX_ENDPOINT = os.environ.get("CX_OTEL_ENDPOINT", "https://ingress.:443")
CX_API_KEY = os.environ.get("CX_API_KEY", "")
OPENAI_KEY = os.environ.get("OPENAI_API_KEY", "")
resource = Resource.create({"service.name": "manual-genai-demo",
                            "cx.application.name": "my-genai-app",
                            "cx.subsystem.name": "my-service"})
provider = TracerProvider(resource=resource)
provider.add_span_processor(BatchSpanProcessor(
    OTLPSpanExporter(endpoint=CX_ENDPOINT,
                     headers=[("Authorization", f"Bearer {CX_API_KEY}")])))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("genai-manual", "1.0.0")

def to_semconv(messages):
    """Convert OpenAI messages to new semconv JSON parts format."""
    result = []
    for m in messages:
        parts = []
        if m.get("content"):
            parts.append({"type": "text", "content": m["content"]})
        if m.get("tool_calls"):
            for tc in m["tool_calls"]:
                parts.append({"type": "tool_call", "id": tc["id"],
                              "name": tc["function"]["name"],
                              "arguments": tc["function"]["arguments"]})
        entry = {"role": m["role"], "parts": parts}
        if m.get("tool_call_id"):
            entry["tool_call_id"] = m["tool_call_id"]
        result.append(entry)
    return json.dumps(result)

def chat(messages, model="gpt-4o-mini", max_tokens=200, user=None):
    with tracer.start_as_current_span(
            f"chat {model}", kind=SpanKind.CLIENT,
            attributes={"gen_ai.operation.name": "chat", "gen_ai.system": "openai",
                        "gen_ai.request.model": model, "gen_ai.request.max_tokens": max_tokens,
                        "gen_ai.input.messages": to_semconv(messages),
                        "server.address": "api.openai.com", "server.port": 443}) as span:
        if user:
            span.set_attribute("gen_ai.request.user", user)
        body = {"model": model, "messages": messages, "max_tokens": max_tokens}
        if user:
            body["user"] = user
        resp = httpx.post("https://api.openai.com/v1/chat/completions",
                          headers={"Authorization": f"Bearer {OPENAI_KEY}",
                                   "Content-Type": "application/json"},
                          json=body, timeout=60.0)
        resp.raise_for_status()
        data = resp.json()
        usage = data.get("usage", {})
        choices = data.get("choices", [])
        span.set_attribute("gen_ai.response.model", data.get("model", model))
        span.set_attribute("gen_ai.response.id", data.get("id", ""))
        span.set_attribute("gen_ai.response.finish_reasons",
                           json.dumps([c.get("finish_reason") for c in choices]))
        span.set_attribute("gen_ai.usage.input_tokens", usage.get("prompt_tokens", 0))
        span.set_attribute("gen_ai.usage.output_tokens", usage.get("completion_tokens", 0))
        out = []
        for c in choices:
            m = c.get("message", {})
            parts = []
            if m.get("content"):
                parts.append({"type": "text", "content": m["content"]})
            if m.get("tool_calls"):
                for tc in m["tool_calls"]:
                    parts.append({"type": "tool_call", "id": tc["id"],
                                  "name": tc["function"]["name"],
                                  "arguments": tc["function"]["arguments"]})
            out.append({"role": m.get("role", "assistant"), "parts": parts})
        span.set_attribute("gen_ai.output.messages", json.dumps(out))
        return data

result = chat(
    messages=[{"role": "system", "content": "You are a concise assistant."},
              {"role": "user", "content": "What is OpenTelemetry in one sentence?"}],
    user="user-42")
print(f"Response: {result['choices'][0]['message']['content']}")
provider.force_flush()
provider.shutdown()
Universal template
To adapt for Anthropic, Gemini, or any other provider, change the HTTP endpoint, request body format, and response parsing. The OTel span attributes remain identical.
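As a sketch of what that adaptation looks like for Anthropic (assuming its Messages API shape: the /v1/messages endpoint, x-api-key and anthropic-version headers, a top-level system field, and a usage object with input_tokens/output_tokens), only the two provider-specific pieces of the chat() function change. The helper names here are ours, for illustration:

```python
import json

def anthropic_request(messages, model="claude-sonnet-4-20250514", max_tokens=200):
    """Provider-specific part 1: endpoint, headers, and body for Anthropic.
    The span attributes set around the call stay identical."""
    # Anthropic takes the system prompt as a top-level field, not a message.
    system = " ".join(m["content"] for m in messages if m["role"] == "system")
    body = {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [m for m in messages if m["role"] != "system"],
    }
    if system:
        body["system"] = system
    headers = {"x-api-key": "<YOUR_ANTHROPIC_API_KEY>",
               "anthropic-version": "2023-06-01",
               "content-type": "application/json"}
    return "https://api.anthropic.com/v1/messages", headers, body

def anthropic_span_attrs(data):
    """Provider-specific part 2: map the response onto the same gen_ai.* keys."""
    usage = data.get("usage", {})
    return {
        "gen_ai.response.id": data.get("id", ""),
        "gen_ai.response.model": data.get("model", ""),
        "gen_ai.response.finish_reasons": json.dumps([data.get("stop_reason")]),
        "gen_ai.usage.input_tokens": usage.get("input_tokens", 0),
        "gen_ai.usage.output_tokens": usage.get("output_tokens", 0),
        "gen_ai.output.messages": json.dumps([{
            "role": data.get("role", "assistant"),
            "parts": [{"type": "text", "content": block.get("text", "")}
                      for block in data.get("content", [])],
        }]),
    }

url, headers, body = anthropic_request(
    [{"role": "system", "content": "Be concise."},
     {"role": "user", "content": "What is OpenTelemetry?"}])
print(url, body["system"], len(body["messages"]))
```

Everything else in Example 7 — the tracer setup, span name, SpanKind.CLIENT, and the request-side attributes — carries over unchanged.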
Example 8: Go — manual instrumentation
There is no GenAI auto-instrumentation for Go, so this example creates manual spans with the new semantic conventions and calls OpenAI via raw net/http.
go.mod
module go-genai-manual-demo
go 1.22
require (
	go.opentelemetry.io/otel v1.34.0
	go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.34.0
	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.34.0
	go.opentelemetry.io/otel/sdk v1.34.0
	go.opentelemetry.io/otel/trace v1.34.0
	google.golang.org/grpc v1.70.0
)
Environment variables
export OPENAI_API_KEY="sk-..."
export CX_OTEL_ENDPOINT="ingress.:443"
export CX_API_KEY="<YOUR_CX_API_KEY>"
main.go
package main
import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"time"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	semconv "go.opentelemetry.io/otel/semconv/v1.26.0"
	"go.opentelemetry.io/otel/trace"
	"google.golang.org/grpc/credentials"
)

type chatMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatRequest struct {
	Model    string        `json:"model"`
	Messages []chatMessage `json:"messages"`
	User     string        `json:"user,omitempty"`
}

type chatChoice struct {
	Message      chatMessage `json:"message"`
	FinishReason string      `json:"finish_reason"`
}

type chatUsage struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens"`
}

type chatResponse struct {
	ID      string       `json:"id"`
	Model   string       `json:"model"`
	Choices []chatChoice `json:"choices"`
	Usage   chatUsage    `json:"usage"`
}

type msgPart struct {
	Type    string `json:"type"`
	Content string `json:"content"`
}

type semconvMsg struct {
	Role  string    `json:"role"`
	Parts []msgPart `json:"parts"`
}

func initTracer(ctx context.Context) (*sdktrace.TracerProvider, error) {
	endpoint := os.Getenv("CX_OTEL_ENDPOINT")
	if endpoint == "" {
		endpoint = "ingress.coralogix.com:443"
	}
	exporter, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithEndpoint(endpoint),
		otlptracegrpc.WithTLSCredentials(credentials.NewClientTLSFromCert(nil, "")),
		otlptracegrpc.WithHeaders(map[string]string{
			"Authorization": "Bearer " + os.Getenv("CX_API_KEY"),
		}),
	)
	if err != nil {
		return nil, err
	}
	res, _ := resource.Merge(resource.Default(),
		resource.NewWithAttributes(semconv.SchemaURL,
			semconv.ServiceName("go-genai-manual-demo")))
	tp := sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(exporter), sdktrace.WithResource(res))
	otel.SetTracerProvider(tp)
	return tp, nil
}

func chatCompletion(ctx context.Context, tracer trace.Tracer) (*chatResponse, error) {
	model := "gpt-4o-mini"
	user := "user-42"
	msgs := []chatMessage{
		{Role: "system", Content: "You are a concise assistant."},
		{Role: "user", Content: "What is OpenTelemetry?"},
	}
	inputMsgs := make([]semconvMsg, len(msgs))
	for i, m := range msgs {
		inputMsgs[i] = semconvMsg{Role: m.Role,
			Parts: []msgPart{{Type: "text", Content: m.Content}}}
	}
	inputJSON, _ := json.Marshal(inputMsgs)
	ctx, span := tracer.Start(ctx, "chat "+model,
		trace.WithSpanKind(trace.SpanKindClient),
		trace.WithAttributes(
			attribute.String("gen_ai.operation.name", "chat"),
			attribute.String("gen_ai.system", "openai"),
			attribute.String("gen_ai.provider.name", "openai"),
			attribute.String("gen_ai.request.model", model),
			attribute.String("gen_ai.request.user", user),
			attribute.String("gen_ai.input.messages", string(inputJSON)),
			attribute.String("server.address", "api.openai.com"),
			attribute.Int("server.port", 443),
		))
	defer span.End()
	body, _ := json.Marshal(chatRequest{Model: model, Messages: msgs, User: user})
	req, _ := http.NewRequestWithContext(ctx, "POST",
		"https://api.openai.com/v1/chat/completions", bytes.NewReader(body))
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		span.RecordError(err)
		return nil, err
	}
	defer resp.Body.Close()
	respBytes, _ := io.ReadAll(resp.Body)
	var chatResp chatResponse
	json.Unmarshal(respBytes, &chatResp)
	finishReasons := make([]string, len(chatResp.Choices))
	outMsgs := make([]semconvMsg, len(chatResp.Choices))
	for i, c := range chatResp.Choices {
		finishReasons[i] = c.FinishReason
		outMsgs[i] = semconvMsg{Role: c.Message.Role,
			Parts: []msgPart{{Type: "text", Content: c.Message.Content}}}
	}
	outJSON, _ := json.Marshal(outMsgs)
	span.SetAttributes(
		attribute.String("gen_ai.response.id", chatResp.ID),
		attribute.String("gen_ai.response.model", chatResp.Model),
		attribute.StringSlice("gen_ai.response.finish_reasons", finishReasons),
		attribute.Int("gen_ai.usage.input_tokens", chatResp.Usage.PromptTokens),
		attribute.Int("gen_ai.usage.output_tokens", chatResp.Usage.CompletionTokens),
		attribute.String("gen_ai.output.messages", string(outJSON)))
	return &chatResp, nil
}

func main() {
	ctx := context.Background()
	tp, err := initTracer(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer func() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		tp.Shutdown(ctx)
	}()
	resp, err := chatCompletion(ctx, otel.Tracer("genai-manual-demo"))
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Choices {
		fmt.Printf("%s: %s\n", c.FinishReason, c.Message.Content)
	}
	fmt.Printf("Tokens: %d in, %d out\n", resp.Usage.PromptTokens, resp.Usage.CompletionTokens)
}
Build and run
go mod tidy
go run main.go
Tip
The Go OTel SDK does not export gen_ai.* constants yet — all keys are raw strings. Both gen_ai.system (deprecated) and gen_ai.provider.name are set for backward compatibility.
Comparison
| # | Example | Language | Instrumentation | Semconv |
|---|---|---|---|---|
| 1 | OpenAI Agents SDK | Python | OTel contrib (auto) | New (opt-in) |
| 2 | Anthropic Claude | Python | OpenLLMetry (auto) | Legacy (migrating) |
| 3 | AWS Bedrock | Python | OTel contrib (auto) | New (opt-in) |
| 4 | OpenLLMetry + OpenAI | Python | Traceloop SDK (auto) | Legacy |
| 5 | Java manual | Java | Manual spans | New |
| 6 | .NET Microsoft.Extensions.AI | C# | Microsoft.Extensions.AI (auto) | New (native) |
| 7 | Manual (no third-party) | Python | Manual spans + raw HTTP | New |
| 8 | Go manual | Go | Manual spans + net/http | New |
Next steps
Look up which open-source library to use for your provider in Provider compatibility.