AI Explorer

An AI span is a span that captures an AI-related operation, such as calling an LLM or invoking the Guardrails SDK. In AI Explorer, you can view span-level data for every LLM interaction in a selected application, including security and quality evaluations, guardrail actions, latency, cost, token usage, tool invocations, and trace correlation. This helps you identify issues quickly and investigate them in context.

Use AI Explorer to:

  • Troubleshoot specific interactions by reviewing prompts and responses, detected evaluations, guardrail actions, tool usage, and supporting metadata.
  • Investigate performance and trace the full request flow end-to-end by viewing the related trace in Explore Traces.
  • Reduce risk and improve quality by focusing on high-severity evaluation results and guardrail actions, then validating outcomes in span and trace context.

Access AI Explorer

  1. In the Coralogix UI, go to AI Center, then Application Catalog.
  2. Select an application.
  3. Select AI Explorer.

Views in AI Explorer

AI Explorer provides two complementary views: AI Spans and AI Interactions.

AI Spans view

The AI Spans view shows individual span-level data. Each row in the table represents a single LLM span within a trace. Use this view to move from an app-level symptom to the exact interaction and trace that explains it.

AI Interactions view

The AI Interactions view groups data by trace ID, similar to how Explore Traces shows both spans and traces views. Each row in the AI Interactions table represents a single AI interaction — a trace that contains one or more AI spans.

The table columns are the same as in the AI Spans view, with one additional column:
  • AI Spans: The number of AI spans included in that interaction (trace).

Selecting an interaction row opens a drill-down view that shows the same fields (evaluations, guardrail actions, tokens, cost, and so on) aggregated at the interaction level rather than the span level.
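Conceptually, the interaction-level rollup groups span records by trace ID and sums their metrics. The following is a minimal sketch of that grouping; the field names (`trace_id`, `tokens`, `cost_usd`, `guardrail_action`) are illustrative and not the exact attribute keys Coralogix uses:

```python
from collections import defaultdict

# Illustrative span records; field names are hypothetical, not Coralogix's schema.
spans = [
    {"trace_id": "t1", "tokens": 320, "cost_usd": 0.004, "guardrail_action": None},
    {"trace_id": "t1", "tokens": 180, "cost_usd": 0.002, "guardrail_action": "Blocked"},
    {"trace_id": "t2", "tokens": 90,  "cost_usd": 0.001, "guardrail_action": None},
]

def aggregate_interactions(spans):
    """Group AI spans by trace ID and aggregate span count, tokens, and cost."""
    interactions = defaultdict(lambda: {"ai_spans": 0, "tokens": 0, "cost_usd": 0.0})
    for s in spans:
        agg = interactions[s["trace_id"]]
        agg["ai_spans"] += 1          # becomes the AI Spans column
        agg["tokens"] += s["tokens"]
        agg["cost_usd"] += s["cost_usd"]
    return dict(interactions)

print(aggregate_interactions(spans))
```

Here trace `t1` rolls up into one interaction row with two AI spans, mirroring how each AI Interactions row summarizes all AI spans in its trace.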

Review span attributes in the table

The AI Spans table provides a high-level view of LLM spans so you can scan, sort, and filter for outliers.
  • Timestamp: The exact time the span occurred.
  • Errors: Whether the span includes errors.
  • Input: The user input for the span.
  • Output: The LLM output for the span.
  • Tokens: Token consumption during the span.
  • Cost: Estimated span cost in USD.
  • User ID: The user identifier associated with the span.
  • Security issues: High-score security evaluations detected in the span.
  • Quality issues: High-score quality evaluations detected in the span.
  • Duration: Span duration.
  • Guardrail action: The Guardrails action taken for the span, such as Blocked.
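Scanning, sorting, and filtering on these columns is conceptually the same as sorting and filtering a list of span rows. A minimal sketch with hypothetical field names (not the exact attribute keys in the product):

```python
# Illustrative span rows mirroring the table columns; field names are hypothetical.
rows = [
    {"span_id": "a", "cost_usd": 0.012, "duration_ms": 950,  "security_issues": ["PII"]},
    {"span_id": "b", "cost_usd": 0.001, "duration_ms": 120,  "security_issues": []},
    {"span_id": "c", "cost_usd": 0.030, "duration_ms": 4100, "security_issues": ["Prompt injection"]},
]

# Sort by cost so the most expensive spans surface first.
by_cost = sorted(rows, key=lambda r: r["cost_usd"], reverse=True)

# Filter to spans flagged with at least one security issue.
flagged = [r for r in rows if r["security_issues"]]

print([r["span_id"] for r in by_cost])
print([r["span_id"] for r in flagged])
```

In AI Explorer you would apply the same logic through the table's sort and filter controls rather than in code.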

[Screenshot: AI Explorer Interactions view showing spans with timestamp, input, output, tokens, user ID, security issues, quality issues, guardrail action, duration, and AI spans columns; the first row shows a Block action.]

The AI Explorer Interactions view lists every LLM span with inline security and quality issue labels, guardrail action badges (such as Block), token counts, and user identifiers, making it straightforward to identify problematic interactions at a glance.

Inspect a span in detail

Select a span row to open the span details panel. This view provides the full context of that LLM span and shows how it fits into the trace that processed the request.
  • Span ID: The span identifier and duration.
  • Conversation segments: System prompt, user prompt and response, detected evaluations including occurrences and scores, tool call details, tool response, and final AI response.
  • Metadata: Trace ID, models used, timestamp, estimated cost, token usage, and other span attributes.
  • Activated policies: Policies that evaluated or guarded the span, including evaluation scores and guardrail actions.
  • Tools: Tools invoked in the span, including tool names and parameters.

[Screenshot: AI Explorer span details panel showing the full conversation (user input tagged with evaluation results, system prompt, guardrail policy message, AI response) alongside Metadata and Activated Policies panels listing evaluation scores and guardrail actions.]

The span details panel shows the full conversation alongside metadata and an Activated Policies panel that lists every evaluation result and guardrail action for that span — including individual scores and the action taken for each policy.

Investigate Guardrails actions

When a span includes a guardrail action, the span details panel shows the following information:

  • The detected issue type, such as PII.
  • The action taken, such as Blocked.

Use this view to confirm why a span was blocked and to validate that the correct policy triggered.

Understand evaluation results in span context

When you open a span, the span details show the evaluations that ran for that span and what they returned. Each evaluation includes the following information:

  • The evaluation name, such as restricted topics, toxicity, or prompt injection.
  • The score for the prompt or the response.
  • A severity label.
  • High: The score crossed the configured threshold. This is considered an issue. These results surface in AI Center dashboards.
  • Low: The score did not cross the high-severity threshold. This is considered no issue, or low severity.
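The High/Low labeling reduces to a threshold comparison. A minimal sketch, assuming higher scores mean higher severity (the threshold value is illustrative; the actual threshold is configured per evaluation):

```python
def severity_label(score: float, threshold: float) -> str:
    """Label an evaluation score as High once it crosses the configured threshold."""
    # Assumption: higher score = more severe; the product's comparison may differ.
    return "High" if score >= threshold else "Low"

print(severity_label(0.92, threshold=0.8))  # High-severity results surface in dashboards
print(severity_label(0.35, threshold=0.8))  # Low: no issue, or low severity
```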

Explore the full trace for end-to-end context

Each AI span belongs to a trace. Use trace exploration when you need full end-to-end context across services, tools, and downstream components.

To explore the full trace:

  1. In the span details panel, select Explore span to open the related trace in Explore Traces.
  2. Switch between visualizations, such as Gantt and flame graph views, to understand timing, dependencies, and bottlenecks.
  3. In the trace drilldown, review LLM attributes such as prompts and responses as span tags.

[Screenshot: Explore Traces Gantt view of a trace containing guardrails spans (cx.guardrails.session, guardrails.prompt, guardrails.response) alongside LLM and service spans, across 2 services and 18 spans.]

Opening a span in Explore Traces reveals the full end-to-end trace, including guardrails spans such as cx.guardrails.session, guardrails.prompt, and guardrails.response, letting you see exactly where guardrail checks ran relative to the LLM call and other downstream services.

To investigate further using traces and spans:

  1. Use traces to identify abnormal requests. In the trace view, compare executions by time range, service, action, or flow to spot errors and latency outliers.
  2. Use spans to find the operation that explains the behavior. In the span view, compare operations across traces using RED signals and span attributes to determine whether the issue is isolated or widespread.
  3. Open the drilldown view to inspect details. Review timing, structure, dependencies, and related context in more detail. In the Info Panel, review LLM attributes such as prompts and responses as span tags.