Strands Agents
Monitor applications built with the Strands Agents SDK using Coralogix AI Observability. The Strands integration enriches the built-in OpenTelemetry tracing provided by the SDK with GenAI semantic convention attributes, so you can track prompt content, completions, tool calls, and user identity across every agent run.
How Strands instrumentation works
The Strands Agents SDK ships with built-in OpenTelemetry tracing. The llm-tracekit-strands package enriches those existing spans with additional GenAI semantic convention attributes, covering prompt and completion content, tool call metadata, finish reasons, and end-user identity. You don't need to create spans manually.
What you need
- Python 3.10–3.13.
- Coralogix API keys.
- strands-agents 0.1.0 or newer.
Installation
Install the package:
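For example, with pip (assuming the package is published under the llm-tracekit-strands name used above):

```shell
pip install llm-tracekit-strands
```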
Authentication
To export spans to Coralogix, configure OTLP credentials before enabling instrumentation. Use setup_export_to_coralogix or the corresponding environment variables.
Using setup_export_to_coralogix
```python
from llm_tracekit.strands import setup_export_to_coralogix

setup_export_to_coralogix(
    service_name="ai-service",
    application_name="ai-application",
    subsystem_name="ai-subsystem",
    capture_content=True,
)
```
Manual OTel setup
Configure a tracer provider directly using the standard OpenTelemetry SDK:
```python
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

tracer_provider = TracerProvider(
    resource=Resource.create({SERVICE_NAME: "ai-service"}),
)
exporter = OTLPSpanExporter()
span_processor = SimpleSpanProcessor(exporter)
tracer_provider.add_span_processor(span_processor)
trace.set_tracer_provider(tracer_provider)
```
Using environment variables
If arguments are not passed to setup_export_to_coralogix, the helper reads the following environment variables:
- CX_TOKEN: Your Coralogix API key.
- CX_ENDPOINT: The ingress endpoint (ingress.<domain>:443) that corresponds to your Coralogix domain.
- CX_APPLICATION_NAME: Your application's name.
- CX_SUBSYSTEM_NAME: Your subsystem's name.
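For example (placeholder values; substitute your own key and the endpoint for your Coralogix domain):

```shell
export CX_TOKEN="<your-api-key>"
export CX_ENDPOINT="ingress.<your-coralogix-domain>:443"
export CX_APPLICATION_NAME="ai-application"
export CX_SUBSYSTEM_NAME="ai-subsystem"
```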
Set up tracing
Instrument
Create an instance of StrandsInstrumentor and call instrument before running your agent.
Uninstrument
To remove instrumentation, call the uninstrument method.
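For example (a minimal sketch; assumes llm-tracekit-strands is installed):

```python
from llm_tracekit.strands import StrandsInstrumentor

instrumentor = StrandsInstrumentor()
instrumentor.instrument()    # enrich Strands spans with GenAI attributes

# ... run your agent ...

instrumentor.uninstrument()  # restore the SDK's default tracing
```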
Full example
```python
from llm_tracekit.strands import StrandsInstrumentor, setup_export_to_coralogix
from strands import Agent

# Optional: configure sending spans to Coralogix
# Reads connection details from environment variables: CX_TOKEN, CX_ENDPOINT
setup_export_to_coralogix(
    service_name="ai-service",
    application_name="ai-application",
    subsystem_name="ai-subsystem",
    capture_content=True,
)

# Activate instrumentation
StrandsInstrumentor().instrument()

# Example Strands usage
agent = Agent(system_prompt="You are a helpful assistant.")
response = agent("Write a short poem on open telemetry.")
```
Enable message content capture
By default, message content—such as prompts, completions, tool call arguments, and tool responses—is not captured.
To capture message content as span attributes, do one of the following:
- Pass capture_content=True when calling setup_export_to_coralogix.
- Set the environment variable OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT to true.
Coralogix recommends enabling message content capture. Many AI evaluations will not work without it.
User identification
To associate traces with a specific end user, pass the user parameter in your model's params configuration:
```python
from strands import Agent
from strands.models.openai import OpenAIModel

model = OpenAIModel(
    model_id="gpt-4o",
    params={"user": "[email protected]"},
)
agent = Agent(model=model)
```
This sets the gen_ai.request.user span attribute on every request made by that agent.
Validate the integration
After running an instrumented agent, open AI Center in Coralogix. Agent runs appear as spans in the LLM Calls view. Confirm that:
- Spans are associated with the correct application and subsystem names.
- Prompt and completion content appears if message content capture is enabled.
- Tool calls, finish reasons, and user identifiers populate as expected.
Semantic conventions
| Attribute | Type | Description | Example |
|---|---|---|---|
| gen_ai.prompt.<message_number>.role | string | Role of the author for each input message | system, user, assistant, tool |
| gen_ai.prompt.<message_number>.content | string | Contents of the input message (captured when content capture is enabled) | What's the weather in Paris? |
| gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.id | string | ID of a tool call issued from the prompt | call_O8NOz8VlxosSASEsOY7LDUcP |
| gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.type | string | Type of tool call issued from the prompt | function |
| gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.function.name | string | Function name used in the prompt's tool call | get_current_weather |
| gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.function.arguments | string | Arguments passed to the prompt's tool call | {"location": "Seattle, WA"} |
| gen_ai.prompt.<message_number>.tool_call_id | string | ID of the tool call result returned in a tool message | call_mszuSIzqtI65i1wAUOE8w5H4 |
| gen_ai.completion.<choice_number>.role | string | Role of the author for each returned choice | assistant |
| gen_ai.completion.<choice_number>.finish_reason | string | Finish reason reported for the choice | stop, tool_calls, error |
| gen_ai.completion.<choice_number>.content | string | Text returned by the model (captured when content capture is enabled) | The weather in Paris is rainy and overcast... |
| gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.id | string | ID of a tool call triggered by the model | call_O8NOz8VlxosSASEsOY7LDUcP |
| gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.type | string | Type of tool call triggered by the model | function |
| gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.function.name | string | Function name executed by the model's tool call | get_current_weather |
| gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.function.arguments | string | Arguments supplied to the model's tool call | {"location": "Seattle, WA"} |
| gen_ai.request.tools.<tool_number>.type | string | Declared type for each available tool definition exposed to the model | function |
| gen_ai.request.tools.<tool_number>.function.name | string | Tool name surfaced in the request-level invocation params | get_current_weather |
| gen_ai.request.tools.<tool_number>.function.description | string | Tool description shown to the model | Get the current weather in a given location |
| gen_ai.request.tools.<tool_number>.function.parameters | string | JSON-serialized parameter schema | {"type":"object","properties":{"location":{"type":"string"}},"required":["location"]} |
| gen_ai.request.user | string | A unique identifier representing the end-user | [email protected] |
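To illustrate the indexed key scheme in the table above, here is a sketch of how a prompt with a tool call flattens into attribute keys. The helper function is illustrative only, not part of llm-tracekit:

```python
# Sketch: how indexed GenAI attribute keys are composed.
# The prefixes and fields follow the semantic conventions table;
# flatten_prompt is a hypothetical helper, not a library API.

def flatten_prompt(messages):
    """Flatten a list of chat messages into indexed span attribute keys."""
    attrs = {}
    for i, msg in enumerate(messages):
        prefix = f"gen_ai.prompt.{i}"
        attrs[f"{prefix}.role"] = msg["role"]
        if "content" in msg:
            attrs[f"{prefix}.content"] = msg["content"]
        for j, call in enumerate(msg.get("tool_calls", [])):
            cp = f"{prefix}.tool_calls.{j}"
            attrs[f"{cp}.id"] = call["id"]
            attrs[f"{cp}.type"] = call["type"]
            attrs[f"{cp}.function.name"] = call["function"]["name"]
            attrs[f"{cp}.function.arguments"] = call["function"]["arguments"]
    return attrs

attrs = flatten_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "assistant", "tool_calls": [{
        "id": "call_123", "type": "function",
        "function": {"name": "get_current_weather",
                     "arguments": '{"location": "Seattle, WA"}'},
    }]},
])
print(attrs["gen_ai.prompt.1.tool_calls.0.function.name"])  # get_current_weather
```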
Next steps
- AI Center Overview — monitor performance, costs, quality issues, and security across all AI applications
- Evaluate — set up evaluation policies to assess prompt and response quality
- LangChain integration — add LangChain tracing alongside Strands