Span Related Data

Overview

When you investigate a slow request or a failing operation, a span on its own rarely tells the full story. Span Related Data brings the most relevant signals—logs, events, profiling, infrastructure context, and AI session—into the same drilldown so you can move from "something is wrong" to "here’s why" without jumping between tools.

Which tabs are available depends on the selected level (span, trace, or service) and on which related data sources are enabled in your account.

Related data is designed for fast troubleshooting:

  • Confirm what failed (Logs / Errors).
  • Understand what changed (Events).
  • Understand where time was spent in code (Profiling).
  • Check whether the environment contributed (Infrastructure).
  • Troubleshoot AI behavior when the span represents a GenAI/LLM operation (AI Session).

To open Related data:

  1. Go to Explore Tracing.
  2. Select a trace or a span.
  3. Scroll to the bottom of the page to view Related data.
  4. Use the tabs on the left to switch between Logs, Events, Profiling, Infrastructure, and AI Session (when available).

Open related data

Tip

If you need more space, you can maximize the drilldown and minimize it again when you’re done.

Logs

Use this tab to quickly answer: "What failed, and what happened right before it?" without leaving the trace investigation flow.

How correlation works

Logs are populated automatically based on:

  • Trace and span identifiers (for example, trace_id, span_id).
  • Service context (for example, service name / subsystem).
  • Timeframe based on the trace duration.

If your data is instrumented with OpenTelemetry (OTel) and context propagation is set up correctly, correlation typically works automatically when standard OTel fields are present. Coralogix maps common OTel fields internally so the UI can show your original field names while correlation still works behind the scenes.

If a tab is empty, it usually means one of the correlation inputs (trace or span IDs or service context) is missing or not being extracted.
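As an illustrative sketch (not the actual Coralogix implementation), correlation can be thought of as matching a log against a span on those three inputs. The field names below follow the OTel conventions mentioned above; the record shapes are assumptions for illustration:

```python
# Hypothetical log record and span; field names follow OTel conventions.
log = {"timestamp": 1700000000.5, "severity": "Error",
       "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
       "span_id": "00f067aa0ba902b7", "body": "payment failed"}
span = {"trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
        "span_id": "00f067aa0ba902b7",
        "start": 1700000000.0, "end": 1700000001.0}

def correlates(log, span):
    """A log matches a span when the IDs align and it falls in the span's time window."""
    return (log.get("trace_id") == span["trace_id"]
            and log.get("span_id") == span["span_id"]
            and span["start"] <= log["timestamp"] <= span["end"])

print(correlates(log, span))  # True
```

If any one of these inputs is absent from the log (for example, the IDs were never extracted into fields), the match fails, which is why an empty tab usually points at a missing correlation input rather than missing data.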

What you can do here

  • Filter the logs to control how wide the log context is:
    • Span: Shows logs correlated with the selected span only.
    • Trace: Shows logs correlated with any span in the selected trace.
    • Service (Trace-linked): Shows logs that are directly linked to the selected trace.
    • Service: Shows all logs from the selected service without any trace or span correlation.
  • Filter by log severity: Debug, Verbose, Info, Warning, Error, Critical.
  • Search within the correlated log results.
  • Open Explore logs to continue investigation in the full logs experience.
  • Manage columns to choose which fields appear in the logs table (for example, timestamp, severity, and key attributes). This helps you focus on the fields you need while investigating.
  • Use the menu to:
    • View query (see the correlation query used).
    • Setup correlation (configure correlation keys when needed).
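The four scopes above progressively widen the filter, from a single span out to the whole service. A minimal sketch of that widening, where the predicate logic and field names are assumptions for illustration (not Coralogix internals):

```python
# Hypothetical predicates showing how each scope widens the log filter.
def scope_predicate(scope, span_id, trace_id, service):
    if scope == "span":            # tightest: logs on this exact span
        return lambda log: log.get("span_id") == span_id
    if scope == "trace":           # all logs linked to the whole request
        return lambda log: log.get("trace_id") == trace_id
    if scope == "service_trace_linked":  # this service, trace-linked logs only
        return lambda log: (log.get("service") == service
                            and log.get("trace_id") == trace_id)
    if scope == "service":         # widest: everything from the service
        return lambda log: log.get("service") == service
    raise ValueError(f"unknown scope: {scope}")

logs = [
    {"service": "checkout", "trace_id": "t1", "span_id": "s1"},
    {"service": "checkout", "trace_id": "t1", "span_id": "s2"},
    {"service": "checkout"},  # log with no trace context at all
]
for scope in ("span", "trace", "service"):
    matched = [l for l in logs if scope_predicate(scope, "s1", "t1", "checkout")(l)]
    print(scope, len(matched))  # span 1, trace 2, service 3
```

Note that the widest scope still matches logs that carry no trace context at all, which is why Service scope is useful when correlation fields are missing.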

Related Data - Logs

Suggested investigation path

  1. Start with Error and Critical severity logs in Span scope to see the most relevant failures.
  2. Move to Trace scope if you suspect the error is downstream or part of a chain.
  3. Switch to Service scope to understand whether the issue is systemic during the same timeframe.

View the query used for correlation

If you want to understand exactly how the logs are being pulled in, open the ⋮ menu and select View query.

Set up log correlation

In most cases, you don’t need to configure anything when using OTel, because logs already include the standard identifiers. If you don’t see the logs you expect, you can set up correlation from the Logs tab.

When you should configure correlation

  • You see no logs in Span or Trace scope, even though you know logs exist.
  • Your logs don’t include OTel-standard fields (or the values appear in a different field name).
  • You’re not using OpenTelemetry and need to correlate using your own log fields.

Option 1: Use OTel-standard fields

For best results, logs should include at least one of the following:

  • trace_id / trace.id
  • span_id / span.id
  • Service context, such as subsystem (or an equivalent service identifier)

With these present, Coralogix can correlate logs to spans and traces automatically.

Option 2: Choose correlation keys (non-OTel)

  1. In the Logs tab, open the menu.
  2. Select Setup correlation.
  3. Choose the log fields that best link your logs to traces/spans (for example, trace ID, span ID, service identifiers).
  4. Save your selection.

Guidance:

  • Pick stable, high-signal fields (IDs and service identifiers are best).
  • Keep it small—select up to 10 keys.
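Conceptually, choosing correlation keys amounts to telling Coralogix which of your own field names play the trace-ID, span-ID, and service roles. A hedged sketch, where every custom field name on the right is an assumption about your log schema:

```python
# Illustrative mapping of custom (non-OTel) log fields to correlation roles.
# The field names on the right are assumptions about your own log schema.
correlation_keys = {
    "trace_id": "x_request_trace",   # your custom trace-ID field
    "span_id": "x_request_span",     # your custom span-ID field
    "service": "app_component",      # your service identifier
}

def normalize(log, keys=correlation_keys):
    """Project a raw log onto the standard correlation roles."""
    return {role: log.get(field) for role, field in keys.items()}

raw = {"x_request_trace": "t1", "x_request_span": "s1",
       "app_component": "checkout", "msg": "payment ok"}
print(normalize(raw))  # {'trace_id': 't1', 'span_id': 's1', 'service': 'checkout'}
```

This is why stable, high-signal fields matter: if the mapped field is absent or inconsistent across logs, the projected role is empty and correlation silently fails.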

Option 3: Create a parsing rule (when IDs exist but aren’t extracted)

If your trace/span IDs are present in the raw log message but not extracted into fields, correlation can’t work reliably until they’re parsed.

In that case, create a parsing rule that extracts trace_id, span_id, and service context into structured fields. After the fields are extracted, Related data can correlate logs automatically.
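The kind of extraction a parsing rule performs can be sketched as a regex lifting embedded IDs out of the raw message. The log format and regex below are assumptions for illustration; your rule would use whatever pattern matches your own messages:

```python
import re

# A raw, unstructured log line with the IDs embedded in the message text.
raw = ("2024-05-01 12:00:00 ERROR payment failed "
       "trace_id=4bf92f3577b34da6a3ce929d0e0e4736 span_id=00f067aa0ba902b7")

# Sketch of what a parsing rule does: lift embedded IDs into structured fields.
pattern = re.compile(
    r"trace_id=(?P<trace_id>[0-9a-f]{32}).*?span_id=(?P<span_id>[0-9a-f]{16})")
match = pattern.search(raw)
fields = match.groupdict() if match else {}
print(fields)
```

Once the IDs live in structured fields like these, they satisfy the correlation inputs listed earlier and Related data can match the log to its span automatically.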

Infrastructure

Use this tab to connect a span to where it ran and validate whether resource pressure contributed to the issue.

What you can do here

  • Review the infrastructure context for the span, including host/process/pod/container identifiers and Kubernetes context (namespace, node, cluster).
  • Review correlated infrastructure metrics for the same time window, such as:
    • CPU usage / throttling
    • Memory pressure
    • Restarts
    • Node pressure
    • Network latency, packet drops, retries
  • Spot relationships and patterns using the infrastructure view (for example, host ↔ pod ↔ service) and jump to deeper infrastructure views using View Infrastructure.
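The infrastructure context above is driven by OTel resource attributes attached to the span. As a sketch, the attribute names below follow the OTel semantic conventions (the values are made up), and the "required" set is an assumption you can adapt when auditing your own instrumentation:

```python
# OTel resource attributes that typically power infrastructure correlation.
# Attribute names follow the OTel semantic conventions; values are made up.
resource = {
    "service.name": "checkout",
    "host.name": "ip-10-0-1-23",
    "k8s.cluster.name": "prod-eu",
    "k8s.namespace.name": "payments",
    "k8s.node.name": "node-7",
    "k8s.pod.name": "checkout-5d4f9c-abcde",
}

# Quick sanity check you can run against your own exported resources.
required = {"k8s.namespace.name", "k8s.pod.name", "k8s.node.name"}
missing = required - resource.keys()
print("missing:", sorted(missing))  # missing: []
```

If your spans lack these attributes, the Infrastructure tab has nothing to pivot on (see Troubleshooting below).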

Related Data - Infra

Suggested investigation path

  1. Confirm where the span ran (pod/node/host) and validate the Kubernetes context (namespace/cluster).
  2. Check for obvious anomalies during the same timeframe:
    • high CPU, memory pressure
    • pod restarts
    • node pressure / noisy neighbors
  3. Validate network impact (latency, dropped packets, retries) if the issue looks like timeouts or intermittent slowness.

Profiling

Use this tab to answer: "Where was time spent in code?"—especially when latency is high but logs don’t explain why.

What you can do here

  • Review CPU/core activity during the trace timeframe (grouped by service context).
  • Inspect profiles (for example, Split/Flame/Table views, depending on what’s enabled).
  • Open View Profiling to pivot to the full profiling experience with the same context and timeframe.

Related Data - Profiling

Suggested investigation path

  1. Use Profiling to confirm hotspots, contention, or expensive functions during the trace window.
  2. Correlate what you see here with the Logs tab to connect a slow path to a specific error or deployment change.

Events

Use this tab to understand "What changed or what occurred during this span/trace" (for example, domain events, operational events, or meaningful attributes captured during execution).

What you can do here

  • Switch between Span or Trace scope (Span is usually the tightest context).
  • Search within events.
  • Review event attributes alongside timestamps to understand sequence and impact.

Related Data - Events

Suggested investigation path

  1. Stay in Span scope when you want the tightest context.
  2. Switch to Trace scope when you want to see related events across the whole request.

AI Session

Use this tab when the selected span represents a GenAI or LLM operation. It surfaces the associated AI chat context directly in the trace investigation flow, so you can troubleshoot AI behavior without leaving the drilldown.

The AI Session tab displays the chat associated with the trace and highlights the step that corresponds to the selected span. It surfaces important metadata fields, evaluation scores (Detect Evals), and tool calls — all within the context of the trace. This allows engineers to debug AI behavior in the same place as their backend traces, without switching tools or losing context.

AI Session tab

How correlation works

The AI Session tab appears when a span includes Gen-AI tags that follow the OpenTelemetry Gen-AI Semantic Conventions. Coralogix uses these tags to correlate spans to their associated AI interactions. The specific tags used for correlation may evolve over time as the conventions mature. There are no strictly required tags — the more Gen-AI attributes present, the richer the correlation. Common tags include gen_ai.system, gen_ai.operation.name, gen_ai.model, and gen_ai.request.id.
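As a hedged sketch of that detection, a span "qualifies" when its attributes include Gen-AI convention tags. The tag names are the ones listed above; the "any gen_ai.* attribute" rule is an assumption for illustration, not Coralogix's exact logic:

```python
# Illustrative check: does a span carry Gen-AI semantic-convention attributes?
# Tag names are from the conventions above; the detection rule is an assumption.
span_attributes = {
    "gen_ai.system": "openai",
    "gen_ai.operation.name": "chat",
    "gen_ai.model": "gpt-4o",
}

def has_genai_context(attrs):
    return any(key.startswith("gen_ai.") for key in attrs)

print(has_genai_context(span_attributes))  # True
```

The more of these attributes your instrumentation emits, the richer the chat context, metadata, and evaluation data the tab can correlate.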

What you can do here

  • Understand what the AI model received and returned.
  • Identify incorrect or unexpected outputs.
  • Analyze token usage and cost.
  • Investigate tool calls and agent behavior.
  • Validate response quality using evaluation scores.
  • View AI chats linked to the selected span.
  • Review Metadata for each chat: Model, Estimated cost, Timestamp, Tokens, User ID, AI Spans.
  • Review Detect Evals to see evaluation scores. Select Show more to view additional evaluations.
  • Review Tools to see the functions called by the AI application during the interaction.
  • Select View AI Center to navigate directly to the AI Center for deeper investigation.

Suggested investigation path

  1. Review the chat content to understand what the AI model received and produced.
  2. Check metadata (model, tokens, cost) to validate whether the operation behaved as expected.
  3. Review Detect Evals for quality signals and anomalies.
  4. Select View AI Center to continue investigation with the full AI Center experience.

Troubleshooting

Some tabs may not appear if the data is not enabled in your account. If Related data is empty (or incomplete), start here.

Logs are missing

  • Confirm logs include trace_id / span_id (or equivalent fields) and service context.
  • If IDs exist but aren’t extracted, create a parsing rule to extract them.
  • If you’re not using OTel, configure correlation keys (up to 10) in Setup Correlations.

Infrastructure is missing

  • Confirm spans include resource attributes (Kubernetes/host tags).
  • Validate your instrumentation exports resource metadata consistently.

Profiling is missing

  • Confirm profiling is enabled for the service.
  • Make sure the service name and timeframe align with the trace window.

AI Session is missing

  • Confirm the span includes Gen-AI tags such as gen_ai.system, gen_ai.operation.name, gen_ai.model, and gen_ai.request.id.
  • Confirm the span is correlated to an AI chat in the AI Center.
  • For navigation to work, verify that gen_ai.conversation.id or gen_ai.session.id is present on multi-step interactions.
Suggested overall flow

  1. Start with Logs → Errors to find the failure signal fast.
  2. Check Events to see what changed around the same time.
  3. Use Profiling to determine whether latency was caused by CPU work, contention, or internal code execution.
  4. Use Infrastructure to validate runtime impact and resource pressure.
  5. Use AI Session to troubleshoot AI behavior—then navigate to AI Center if needed.