
Span Related Data

Overview

When you investigate a slow request or a failing operation, a span on its own rarely tells the full story. Span Related Data brings the most relevant signals (logs, events, profiling, and infrastructure context) into the same drilldown, so you can move from "something is wrong" to "here's why" without jumping between tools.

Which tabs are available depends on the selected level (span, trace, or service) and on which related data sources are enabled in your account.

Related data is designed for fast troubleshooting:

  • Confirm what failed (Logs / Errors).
  • Understand what changed (Events).
  • Understand where time was spent in code (Profiling).
  • Check whether the environment contributed (Infrastructure).

To open related data:

  1. Go to Explore tracing.
  2. Select a trace or a span.
  3. Navigate to the bottom of the page to view Related data.
  4. Use the tabs on the left to switch between Logs, Infrastructure, Profiling, and Events.

Open related data

Tip

If you need more space, you can maximize the drilldown and minimize it again when you’re done.

Logs

Use this tab to quickly answer: "What failed, and what happened right before it?" without leaving the trace investigation flow.

How correlation works

Logs are populated automatically based on:

  • Trace and span identifiers (for example, trace_id, span_id).
  • Service context (for example, service name / subsystem).
  • Timeframe based on the trace duration.

If your data is instrumented with OpenTelemetry (OTel) and context propagation is set up correctly, correlation typically works automatically when standard OTel fields are present. Coralogix maps common OTel fields internally so the UI can show your original field names while correlation still works behind the scenes.
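For context, a generic OTel convention (not a Coralogix-specific format): trace IDs are 128-bit and span IDs 64-bit values, conventionally written as lowercase hex (32 and 16 characters respectively). A minimal sketch, with invented values, of what a correlatable log record might carry:

```python
# Sketch: how OTel-style identifiers typically appear in correlated logs.
# All concrete values below are invented for illustration.

def format_trace_id(trace_id: int) -> str:
    """128-bit trace ID as 32 lowercase hex chars (W3C Trace Context style)."""
    return format(trace_id, "032x")

def format_span_id(span_id: int) -> str:
    """64-bit span ID as 16 lowercase hex chars."""
    return format(span_id, "016x")

log_record = {
    "message": "checkout failed: payment gateway timeout",
    "trace_id": format_trace_id(0x4BF92F3577B34DA6A3CE929D0E0E4736),
    "span_id": format_span_id(0x00F067AA0BA902B7),
    "service": "checkout",  # service context / subsystem
}
print(log_record["trace_id"])  # 4bf92f3577b34da6a3ce929d0e0e4736
```

When these identifiers are present as structured fields, correlation keys can match them directly; the hex formatting matters because a mismatch (for example, uppercase vs. lowercase, or an int vs. a string) can break the join between logs and spans.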

If a tab is empty, it usually means one of the correlation inputs (trace or span IDs or service context) is missing or not being extracted.

What you can do here

  • Switch between Service / Trace / Span scope to control how wide the log context is.
  • Filter by log severity: Debug, Verbose, Info, Warning, Error, Critical.
  • Search within the correlated log results.
  • Open Explore logs to continue investigation in the full logs experience.
  • Manage columns to choose which fields appear in the logs table (for example, timestamp, severity, and key attributes). This helps you focus on the fields you need while investigating.
  • Use the menu to:
    • View query (see the correlation query used).
    • Setup correlation (configure correlation keys when needed).

Related Data - Logs

Suggested investigation path

  1. Start with Error and Critical severities in Span scope to see the most relevant failures.
  2. Move to Trace scope if you suspect the error is downstream or part of a chain.
  3. Switch to Service scope to understand whether the issue is systemic during the same timeframe.

View the query used for correlation

If you want to understand exactly how the logs are being pulled in, open the ⋮ menu and select View query.

Set up log correlation

In most cases, you don’t need to configure anything when using OTel, because logs already include the standard identifiers. If you don’t see the logs you expect, you can set up correlation from the Logs tab.

When you should configure correlation

  • You see no logs in Span or Trace scope, even though you know logs exist.
  • Your logs don’t include OTel-standard fields (or the values appear in a different field name).
  • You’re not using OpenTelemetry and need to correlate using your own log fields.

Option 1: Use OTel-standard fields (default)

For best results, logs should include at least one of the following:

  • trace_id / trace.id
  • span_id / span.id
  • Service context, such as subsystem (or an equivalent service identifier)

With these present, Coralogix can correlate logs to spans and traces automatically.
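One common way to get these fields into every log line is to attach them at the logger level. The sketch below uses Python's standard logging module; the filter class name and the hard-coded IDs are illustrative (in a real application the IDs would come from the active span context):

```python
import logging

class TraceContextFilter(logging.Filter):
    """Attach trace/span identifiers to every log record.

    The values here are placeholders; in a real service they would be
    read from the current OTel span context at log time.
    """

    def __init__(self, trace_id: str, span_id: str):
        super().__init__()
        self.trace_id = trace_id
        self.span_id = span_id

    def filter(self, record: logging.LogRecord) -> bool:
        record.trace_id = self.trace_id
        record.span_id = self.span_id
        return True  # keep the record

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s trace_id=%(trace_id)s "
    "span_id=%(span_id)s %(message)s"
))
logger = logging.getLogger("checkout")
logger.addHandler(handler)
logger.addFilter(TraceContextFilter(
    "4bf92f3577b34da6a3ce929d0e0e4736", "00f067aa0ba902b7"))
logger.setLevel(logging.INFO)
logger.info("payment gateway timeout")
```

Because the IDs ride on every record, downstream correlation only needs to match the field names, not parse message text.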

Option 2: Choose correlation keys (non-OTel)

  1. In the Logs tab, open the menu.
  2. Select Setup correlation.
  3. Choose the log fields that best link your logs to traces/spans (for example, trace ID, span ID, service identifiers).
  4. Save your selection.

Guidance:

  • Pick stable, high-signal fields (IDs and service identifiers are best).
  • Keep it small—select up to 10 keys.

Option 3: Create a parsing rule (when IDs exist but aren’t extracted)

If your trace/span IDs are present in the raw log message but not extracted into fields, correlation can’t work reliably until they’re parsed.

In that case, create a parsing rule that extracts trace_id, span_id, and service context into structured fields. After the fields are extracted, Related data can correlate logs automatically.
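As an illustration of what such a rule must capture (the raw log format below is invented), the extraction amounts to pulling named fields out of the message text:

```python
import re

# Hypothetical raw log line with IDs embedded in the message text.
raw = ("2024-05-01T12:00:03Z ERROR checkout "
       "trace=4bf92f3577b34da6a3ce929d0e0e4736 "
       "span=00f067aa0ba902b7 payment timed out")

# Named groups mirror the structured fields correlation needs:
# 32 hex chars for the trace ID, 16 for the span ID.
PATTERN = re.compile(
    r"trace=(?P<trace_id>[0-9a-f]{32})\s+span=(?P<span_id>[0-9a-f]{16})"
)

match = PATTERN.search(raw)
fields = match.groupdict() if match else {}
print(fields)
# {'trace_id': '4bf92f3577b34da6a3ce929d0e0e4736', 'span_id': '00f067aa0ba902b7'}
```

A parsing rule built on a pattern like this promotes the IDs to first-class fields, after which the automatic correlation described above can find them.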

Infrastructure

Use this tab to connect a span to where it ran and validate whether resource pressure contributed to the issue.

What you can do here

  • Review the infrastructure context for the span, including host/process/pod/container identifiers and Kubernetes context (namespace, node, cluster).
  • Review correlated infrastructure metrics for the same time window, such as:
    • CPU usage / throttling
    • Memory pressure
    • Restarts
    • Node pressure
    • Network latency, packet drops, retries
  • Spot relationships and patterns using the infrastructure view (for example, host ↔ pod ↔ service) and jump to deeper infrastructure views using View Infrastructure.

Related Data - Infra

Suggested investigation path

  1. Confirm where the span ran (pod/node/host) and validate the Kubernetes context (namespace/cluster).
  2. Check for obvious anomalies during the same timeframe:
    • high CPU, memory pressure
    • pod restarts
    • node pressure / noisy neighbors
  3. Validate network impact (latency, dropped packets, retries) if the issue looks like timeouts or intermittent slowness.

Profiling

Use this tab to answer: "Where was time spent in code?"—especially when latency is high but logs don’t explain why.

What you can do here

  • Review CPU/core activity during the trace timeframe (grouped by service context).
  • Inspect profiles (for example, Split/Flame/Table views, depending on what’s enabled).
  • Open View Profiling to pivot to the full profiling experience with the same context and timeframe.

Related Data - Profiling

Suggested investigation path

  1. Use Profiling to confirm hotspots, contention, or expensive functions during the trace window.
  2. Correlate what you see here with the Logs tab to connect a slow path to a specific error or deployment change.

Events

Use this tab to understand "What changed or what occurred during this span/trace" (for example, domain events, operational events, or meaningful attributes captured during execution).

What you can do here

  • Switch between Span or Trace scope (Span is usually the tightest context).
  • Search within events.
  • Review event attributes alongside timestamps to understand sequence and impact.

Related Data - Events

Suggested investigation path

  1. Stay in Span scope when you want the tightest context.
  2. Switch to Trace scope when you want to see related events across the whole request.

Troubleshooting

Some tabs may not appear if the data is not enabled in your account. If Related data is empty (or incomplete), start here.

Logs are missing

  • Confirm logs include trace_id / span_id (or equivalent fields) and service context.
  • If IDs exist but aren’t extracted, create a parsing rule to extract them.
  • If you’re not using OTel, configure correlation keys (up to 10) via Setup correlation.

Infrastructure is missing

  • Confirm spans include resource attributes (Kubernetes/host tags).
  • Validate your instrumentation exports resource metadata consistently.

Profiling is missing

  • Confirm profiling is enabled for the service.
  • Make sure the service name and timeframe align with the trace window.
Suggested investigation order

  1. Start with Logs → Errors to find the failure signal fast.
  2. Check Events to see what changed around the same time.
  3. Use Profiling to determine whether latency was caused by CPU work, contention, or internal code execution.
  4. Use Infrastructure to validate runtime impact and resource pressure.