AI Explorer
An AI span is a span that captures an AI-related operation, such as calling an LLM or invoking the Guardrails SDK. In AI Explorer, you can view span-level data for every LLM interaction in a selected application, including security and quality evaluations, guardrail actions, latency, cost, token usage, tool invocations, and trace correlation. This helps you identify issues quickly and investigate them in context.
Use AI Explorer to:
- Troubleshoot specific interactions by reviewing prompts and responses, detected evaluations, guardrail actions, tool usage, and supporting metadata.
- Investigate performance and trace the full request flow end-to-end by viewing the related trace in Explore Traces.
- Reduce risk and improve quality by focusing on high-severity evaluation results and guardrail actions, then validating outcomes in span and trace context.
Access AI Explorer
- In the Coralogix UI, go to AI Center, then Application Catalog.
- Select an application.
- Select AI Explorer.
Views in AI Explorer
AI Explorer provides two complementary views: AI Spans and AI Interactions.
AI Spans view
The AI Spans view shows individual span-level data. Each row in the table represents a single LLM span within a trace. Use this view to move from an app-level symptom to the exact interaction and trace that explains it.
AI Interactions view
The AI Interactions view groups data by trace ID, similar to how Explore Traces shows both spans and traces views. Each row in the AI Interactions table represents a single AI interaction — a trace that contains one or more AI spans.
The table columns are the same as in the AI Spans view, with one additional column:
| Column | Description |
|---|---|
| AI Spans | The number of AI spans included in that interaction (trace) |
Selecting an interaction row opens a drill-down view that shows the same fields (evaluations, guardrail actions, tokens, cost, and so on) aggregated at the interaction level rather than the span level.
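The grouping described above can be sketched in a few lines. This is an illustrative example only: the field names (`trace_id`, `tokens`, `cost_usd`) are assumptions for the sketch, not the actual Coralogix span schema.

```python
from collections import defaultdict

# Illustrative AI span records; field names are assumptions, not the real schema.
spans = [
    {"trace_id": "t1", "tokens": 120, "cost_usd": 0.004},
    {"trace_id": "t1", "tokens": 80,  "cost_usd": 0.002},
    {"trace_id": "t2", "tokens": 300, "cost_usd": 0.010},
]

def group_into_interactions(spans):
    """Group AI spans by trace ID and aggregate span-level fields,
    mirroring how the AI Interactions view rolls spans up per trace."""
    interactions = defaultdict(lambda: {"ai_spans": 0, "tokens": 0, "cost_usd": 0.0})
    for span in spans:
        agg = interactions[span["trace_id"]]
        agg["ai_spans"] += 1              # the extra "AI Spans" column
        agg["tokens"] += span["tokens"]
        agg["cost_usd"] += span["cost_usd"]
    return dict(interactions)

interactions = group_into_interactions(spans)
# e.g. interactions["t1"]["ai_spans"] == 2 and interactions["t1"]["tokens"] == 200
```

Each resulting entry corresponds to one row in the AI Interactions table: a trace with one or more AI spans and its aggregated token and cost totals.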
Review span attributes in the table
The AI Spans table provides a high-level view of LLM spans so you can scan, sort, and filter for outliers.
| Attribute | Description |
|---|---|
| Timestamp | The exact time the span occurred |
| Errors | Whether the span includes errors |
| Input | The user input for the span |
| Output | The LLM output for the span |
| Tokens | Token consumption during the span |
| Cost | Estimated span cost in USD |
| User ID | The user identifier associated with the span |
| Security issues | High-score security evaluations detected in the span |
| Quality issues | High-score quality evaluations detected in the span |
| Duration | Span duration |
| Guardrail action | The Guardrails action taken for the span, such as Blocked |
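Scanning the table for outliers amounts to filtering on a few of these columns. The sketch below assumes hypothetical row fields and a made-up cost threshold; it is not the actual Coralogix attribute naming.

```python
# Hypothetical rows mirroring the table columns above; field names are illustrative.
rows = [
    {"user_id": "u1", "errors": False, "guardrail_action": None,      "cost_usd": 0.002},
    {"user_id": "u2", "errors": True,  "guardrail_action": None,      "cost_usd": 0.001},
    {"user_id": "u3", "errors": False, "guardrail_action": "Blocked", "cost_usd": 0.030},
]

def find_outliers(rows, cost_threshold=0.01):
    """Return rows worth a closer look: errors, guardrail actions, or high cost."""
    return [
        r for r in rows
        if r["errors"] or r["guardrail_action"] is not None or r["cost_usd"] > cost_threshold
    ]

flagged = find_outliers(rows)
# flags u2 (error) and u3 (blocked, high cost)
```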
The AI Spans view lists every LLM span with inline security and quality issue labels, guardrail action badges (such as Block), token counts, and user identifiers, making it straightforward to identify problematic interactions at a glance.
Inspect a span in detail
Select a span row to open the span details panel. This view provides the full context of that LLM span and shows how it fits into the trace that processed the request.
| Section in the span details panel | Description |
|---|---|
| Span ID | The span identifier and duration |
| Conversation segments | System prompt, user prompt and response, detected evaluations including occurrences and scores, tool call details, tool response, and final AI response |
| Metadata | Trace ID, models used, timestamp, estimated cost, token usage, and other span attributes |
| Activated policies | Policies that evaluated or guarded the span, including evaluation scores and guardrail actions |
| Tools | Tools invoked in the span, including tool names and parameters |
The span details panel shows the full conversation alongside metadata and an Activated Policies panel that lists every evaluation result and guardrail action for that span — including individual scores and the action taken for each policy.
Investigate Guardrails actions
When a span includes a guardrail action, the span details panel shows the following information:
- The detected issue type, such as PII.
- The action taken, such as Blocked.
Use this view to confirm why a span was blocked and to validate that the correct policy triggered.
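As a rough sketch of that check, the snippet below pulls the issue type and action out of a hypothetical span-details record. The `activated_policies` structure and its fields are assumptions for illustration, not the actual API shape.

```python
# Hypothetical span-details record; this structure is illustrative only.
span = {
    "activated_policies": [
        {"name": "pii-policy",      "issue": "PII", "action": "Blocked", "score": 0.97},
        {"name": "toxicity-policy", "issue": None,  "action": None,      "score": 0.10},
    ]
}

def guardrail_actions(span):
    """Return (issue type, action) pairs for policies that took an action,
    so you can confirm which policy blocked the span and why."""
    return [
        (p["issue"], p["action"])
        for p in span["activated_policies"]
        if p["action"] is not None
    ]

# guardrail_actions(span) returns [("PII", "Blocked")]
```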
Understand evaluation results in span context
When you open a span, the span details show the evaluations that ran for that span and what they returned. Each evaluation includes the following information:
- The evaluation name, such as restricted topics, toxicity, or prompt injection.
- The score for the prompt or the response.
- A severity label.
| Label | Description |
|---|---|
| High | The score crossed the configured threshold. This is considered an issue. These results surface in AI Center dashboards. |
| Low | The score did not cross the configured threshold. The result is considered low severity or no issue. |
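The High/Low labeling above is a simple threshold comparison, sketched below. The 0.8 threshold and the evaluation names are assumed example values, not product defaults.

```python
# Assumed example threshold; in practice the threshold is configured per evaluation.
THRESHOLD = 0.8

def severity_label(score, threshold=THRESHOLD):
    """'High' when the score crosses the threshold, otherwise 'Low'."""
    return "High" if score >= threshold else "Low"

evaluations = [
    {"name": "prompt_injection", "score": 0.93},
    {"name": "toxicity",         "score": 0.12},
]
labels = {e["name"]: severity_label(e["score"]) for e in evaluations}
# labels == {"prompt_injection": "High", "toxicity": "Low"}
```

Only results labeled High surface as issues in AI Center dashboards.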
Explore the full trace for end-to-end context
Each AI span belongs to a trace. Use trace exploration when you need full end-to-end context across services, tools, and downstream components.
To explore the full trace:
- In the span details panel, select Explore span to open the related trace in Explore Traces.
- Switch between visualizations, such as Gantt and flame graph views, to understand timing, dependencies, and bottlenecks.
- In the trace drilldown, review LLM attributes such as prompts and responses as span tags.
Opening a span in Explore Traces reveals the full end-to-end trace, including guardrails spans such as cx.guardrails.session, guardrails.prompt, and guardrails.response, letting you see exactly where guardrail checks ran relative to the LLM call and other downstream services.
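Conceptually, ordering the trace's spans by start time is what reveals where the guardrail checks ran relative to the LLM call. The sketch below uses the guardrail span names mentioned above, but the `llm.chat` span name and all start times are made up for the example.

```python
# Illustrative trace; start times and the llm.chat span name are invented.
trace_spans = [
    {"name": "guardrails.response",   "start_ms": 230},
    {"name": "llm.chat",              "start_ms": 12},
    {"name": "cx.guardrails.session", "start_ms": 0},
    {"name": "guardrails.prompt",     "start_ms": 5},
]

# Sort by start time to reconstruct the timeline a Gantt view would show.
timeline = [s["name"] for s in sorted(trace_spans, key=lambda s: s["start_ms"])]
# The prompt check runs before the LLM call; the response check runs after it.
```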
To investigate further using traces and spans:
- Use traces to identify abnormal requests. In the trace view, compare executions by time range, service, action, or flow to spot errors and latency outliers.
- Use spans to find the operation that explains the behavior. In the span view, compare operations across traces using RED signals and span attributes to determine whether the issue is isolated or widespread.
- Open the drilldown view to inspect details. Review timing, structure, dependencies, and related context in more detail. In the Info Panel, review LLM attributes such as prompts and responses as span tags.
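The span-level comparison in the steps above relies on RED signals (Rate, Errors, Duration) per operation. A minimal sketch, assuming hypothetical span samples and field names:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical span samples across traces; one record per operation execution.
samples = [
    {"operation": "llm.chat",    "error": False, "duration_ms": 900},
    {"operation": "llm.chat",    "error": True,  "duration_ms": 1500},
    {"operation": "tool.search", "error": False, "duration_ms": 120},
]

def red_signals(samples):
    """Compute Rate (count), Errors (error ratio), and Duration (mean) per operation."""
    by_op = defaultdict(list)
    for s in samples:
        by_op[s["operation"]].append(s)
    return {
        op: {
            "rate": len(rows),
            "error_ratio": sum(r["error"] for r in rows) / len(rows),
            "mean_duration_ms": mean(r["duration_ms"] for r in rows),
        }
        for op, rows in by_op.items()
    }

red = red_signals(samples)
# red["llm.chat"] has rate 2, error_ratio 0.5, mean_duration_ms 1200
```

Comparing these per-operation signals across traces shows whether a slow or failing LLM call is an isolated incident or a widespread pattern.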