
Headers and Quick Actions

Overview

In the Trace drilldown view, trace and span headers surface the key context you need to decide what to investigate next: service, operation, status, duration, and percentile (how the duration compares to baseline). They also provide quick actions directly on those fields, so you can pivot to correlated evidence without leaving the drilldown. For example, the timestamp menu opens Surrounding Explore Logs for the same time window, the service menu pivots to APM Service Catalog or Profiles, and Include/Exclude in query turns a header value into a filter to find other traces or spans with the same pattern.

Related traces are traces that share key attributes (such as service, operation, or error signature); they help you confirm whether an issue is isolated or recurring without rebuilding queries or switching views.

Headers help you:

  • Understand what you’re investigating at a glance
  • Spot slow or failing behavior quickly
  • Correlate to logs and service context using the header values as keys
  • Refine the dataset by including or excluding attributes from the header

Common use cases

  • Use the trace header to quickly determine if a request is slow or failing and whether it’s an outlier (percentile).
  • Use the span header to isolate the specific operation or service-to-service call responsible, then pivot to logs, APM context, or profiles to confirm the cause.

Trace header

The Trace header summarizes the entire distributed request lifecycle—where it started, what it did, whether it succeeded, and how it performed relative to baseline. Use it to quickly confirm impact (slow/failing) and scope (how complex and widespread the request is) before drilling into individual spans.


Service, operation, and status

  • Service name: The entry-point service for the trace (where the request began).
  • Operation: The request name shown as a method + action (for example, GET /checkout). This helps you tie the trace to a real endpoint or business action.
    • Typical methods/actions include HTTP (GET, POST), DB (SELECT, UPDATE), messaging (PUBLISH, CONSUME), cache (GET, SET), or custom operations (for example, processJob).
  • Status code: The outcome of the trace’s root request. It tells you immediately whether the trace completed successfully or failed.

Status codes are color-coded: green (2xx), yellow (1xx, 3xx, 4xx), red (5xx).
| Category | Color | Range | Includes | Meaning / use case |
| --- | --- | --- | --- | --- |
| 🟢 Success | Green | 2xx | 200 OK, 201 Created, 202 Accepted, 204 No Content, etc. | Request succeeded and returned successfully. |
| 🟡 Informational / Redirect / Client | Yellow | 1xx, 3xx, 4xx | 100 Continue, 301 Moved Permanently, 304 Not Modified, 404 Not Found, 408 Request Timeout, 429 Too Many Requests | Informational responses, redirects, or client-side outcomes (including rate limits and timeouts). |
| 🔴 Error | Red | 5xx | 500 Internal Server Error, 502 Bad Gateway, 503 Service Unavailable, 504 Gateway Timeout | Request failed due to a server-side error. |

Use this to confirm whether the trace represents a failure and whether the root request is the likely starting point.
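The color bucketing above can be sketched as a small helper function (a hypothetical illustration, not the product's actual implementation):

```python
def status_color(code: int) -> str:
    """Bucket an HTTP status code into the header's color categories."""
    if 200 <= code < 300:
        return "green"   # 2xx: success
    if code >= 500:
        return "red"     # 5xx: server-side error
    return "yellow"      # 1xx, 3xx, 4xx: informational / redirect / client outcome
```

For example, a 404 and a 429 both land in the yellow bucket, while a 503 is red.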

Duration and percentile

  • Total duration: End-to-end time from trace start to finish.
  • Duration percentile: The percentile bucket this trace falls into compared to similar requests (for example, 95th), which helps you understand how unusual the duration is relative to baseline.

Percentiles are calculated against traces with matching service + operation within the last 6 hours. In the trace header, the percentile is calculated based on the root span (the request entry point).

Use this to quickly identify slow outliers and assess whether the request may contribute to SLO or latency issues.
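As a rough sketch of the percentile idea, the following ranks one duration against a baseline of root-span durations for the same service + operation. The baseline list and ranking method here are illustrative assumptions, not the product's exact computation:

```python
from bisect import bisect_right

def duration_percentile(duration_ms: float, baseline_ms: list) -> float:
    """Percentile rank of a duration against baseline durations collected
    for the same service + operation (e.g., root spans from the last 6 hours)."""
    ranked = sorted(baseline_ms)
    below = bisect_right(ranked, duration_ms)
    return 100.0 * below / len(ranked)

# Hypothetical baseline of root-span durations (ms):
baseline = [120, 130, 140, 150, 160, 900, 1000, 1100, 1200, 5000]
duration_percentile(1150, baseline)  # 80.0: slower than 8 of the 10 baseline durations
```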

Identifiers and scope

  • Trace ID: Unique identifier for the request.
  • Start date/time: When the trace started (includes time zone) for correlation with logs and other telemetry.
  • Service count: How many services participated in the trace.
  • Span count: How many spans make up the trace.
  • Trace depth: The maximum nesting level of spans.

Use this to assess structural complexity and how broadly the request propagated across services.

Span header

The Span header summarizes the selected operation within the trace—its role in the flow, the dependency edge it represents, and whether it is a likely source of latency or failure. Use it to isolate responsibility and decide where to pivot next (logs, profiling, or service context).


Span kind and direction (how to read the header)

The span header maps OpenTelemetry span.kind values to a simplified set of labels and arrows. Use the label to understand the span’s role (entry, exit, async, or local work) and the arrow to understand direction (who calls whom).
| Header label | Tooltip meaning | Canonical span.kind | What it means in practice | Service A (left) | Service B (right) | Arrow shown in header |
| --- | --- | --- | --- | --- | --- | --- |
| Incoming | Handles inbound requests | SERVER | Entry span: request comes into this service | This service (receiver) | Caller service (upstream / parent) | ← inbound |
| Outgoing | Makes an outbound request | CLIENT | Exit span: this service calls out to another service | This service (caller) | Downstream service (callee; endpoint from server.address / server.port) | → outbound |
| Async Out | Publishes a message | PRODUCER | Service publishes a message/event to a broker/topic/queue | This service (publisher) | Broker / topic / queue (for example, messaging.destination.name) | → publish |
| Async In | Receives a message | CONSUMER | Entry for async work: service receives work from a broker/topic/queue | This service (consumer) | Broker / topic (upstream / parent) | ← consume |
| Internal | Performs work within the service | INTERNAL | Local in-process work (no service boundary) | This service only | — | ↺ local loop |

How to use this:

  • Use the label to recognize whether you’re looking at an entry/exit span, async boundary, or local work.
  • Use arrow direction to decide whether to investigate the current service or a downstream dependency.
  • Outgoing (CLIENT) can be matched to a corresponding server span when available (commonly parent_span_id == client_span_id).
  • Async Out (PRODUCER) typically shows the broker/topic/queue. The consumer service is only shown when consumer spans and context propagation allow correlation (often via span links).
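The label/arrow mapping in the table above can be expressed as a simple lookup. This is a hypothetical sketch (the function name and fallback to Internal are assumptions, not the product's code):

```python
# Map canonical OpenTelemetry span.kind values to the header's label and arrow.
SPAN_KIND_LABELS = {
    "SERVER":   ("Incoming",  "← inbound"),
    "CLIENT":   ("Outgoing",  "→ outbound"),
    "PRODUCER": ("Async Out", "→ publish"),
    "CONSUMER": ("Async In",  "← consume"),
    "INTERNAL": ("Internal",  "↺ local loop"),
}

def header_label(span_kind: str) -> str:
    """Return the simplified label + arrow for a span.kind value."""
    label, arrow = SPAN_KIND_LABELS.get(span_kind.upper(), ("Internal", "↺ local loop"))
    return f"{label} {arrow}"
```

For example, a CLIENT span renders as "Outgoing → outbound".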

Service-to-service relationship (dependency edge)

The header shows the relationship edge for the selected span (who is calling whom) based on direction and available peer-service attributes.

Example: frontend → payment

Use this to identify which dependency is involved and where failures or latency may be introduced across a service boundary.

Missing or incomplete edges

Sometimes the peer side can’t be resolved. In those cases, the header still shows the best-known edge so you can continue investigating:

  • No peer service: there are no remote service attributes; only the current service is known.
  • Unknown remote service: peer attributes exist but don’t resolve to a known service. The header displays: Service A → Unknown service.
  • Client spans without a matching server span: the outgoing call is instrumented, but the downstream service is not (or context isn’t propagated), so the server-side span is missing.

When you see Unknown service or a missing edge, treat it as a signal to validate instrumentation/context propagation and pivot to logs or infrastructure context to identify the remote dependency.
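A minimal sketch of the fallback behavior described above, assuming OpenTelemetry-style attribute names (peer.service, server.address); the known_services set and the resolver function itself are hypothetical:

```python
def dependency_edge(current: str, attrs: dict, known_services: set) -> str:
    """Best-known edge for an outgoing span, with fallbacks for missing peers."""
    peer = attrs.get("peer.service") or attrs.get("server.address")
    if peer is None:
        return current                         # no peer attributes: only the current service
    if peer not in known_services:
        return f"{current} → Unknown service"  # peer attributes exist but don't resolve
    return f"{current} → {peer}"
```

For example, dependency_edge("frontend", {"peer.service": "payment"}, {"payment"}) yields "frontend → payment", while an unresolvable address yields "frontend → Unknown service".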

Operation and status

  • Operation: Method + action (for example, POST /api/checkout) tied to the selected span.
  • Status code: The outcome of the span’s operation, using the same color coding as the trace header.

Use this to identify where an error originates and distinguish root failure from downstream impact.

Duration and percentile

  • Span duration: The elapsed time for this operation.
  • Duration percentile: The percentile bucket for this span compared to similar spans (same service + operation), which helps identify abnormal behavior for this specific operation. It uses the same color coding as the trace header.
  • Contribution to trace: A span that consumes a high percentage of the total trace duration is often a strong candidate for investigation.

Use this to find bottlenecks (high contribution) and confirm whether the operation is an outlier (high percentile).
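The contribution calculation is simply the span's share of the total trace duration (a sketch with made-up numbers):

```python
def contribution_pct(span_ms: float, trace_ms: float) -> float:
    """Percentage of the total trace duration spent in this span."""
    return 100.0 * span_ms / trace_ms

contribution_pct(450, 500)  # 90.0: this span dominates the trace
```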

Identifiers and timing

  • Span ID: Unique identifier for the operation.
  • Start date/time: When the span started (includes time zone) for correlation with logs and other telemetry.

Use this to correlate precisely with surrounding logs and time-based debugging.

When to use Span vs. Trace header

| Trace header | Span header |
| --- | --- |
| Represents the full request lifecycle | Represents a single operation |
| Focuses on overall health and scope | Focuses on responsibility and bottlenecks |
| Displays total duration and structural complexity | Displays relative duration and role |
| Used to assess impact | Used to isolate root cause |

  • Trace header = impact and scope
    • "Is this request unhealthy or slow overall?"
    • "How complex/widespread is it?"
  • Span header = responsibility and root cause
    • "Which specific operation caused the slowdown or error?"
    • "Is the issue local to this service or across a dependency boundary?"

Use the trace header to determine if a request is problematic.

Use the span header to determine where and why it is problematic.

Quick actions (Trace and Span headers)

Trace and span headers provide the same categories of quick actions.

The difference is the scope:

  • Trace actions apply to the entire distributed request.
  • Span actions apply only to the selected operation.

Where to find quick actions (applies to both)

You can reach quick actions from:

  • The headers: the primary entry point for investigation actions.
  • The timestamp field menu: for time-centered actions like surrounding logs.
  • Some field chips/values (like IDs): for copy and query refinement.

Actions from the Service name

Where: Select the service name in the header (for example, loadgenerator) to open the service context menu.

Service name actions

Why: These actions help you pivot from "I found an interesting request/span" to "show me everything relevant for this service".

  • Use Copy to clipboard to share the service name.
  • Use View Logs Explore to validate why the service behaved this way (errors, exceptions, timeouts).
  • Use Explore similar traces/spans to open the Explore tracing page listing similar traces or spans.
  • Use APM Service Catalog to drill into the specific operation under that service + time frame.
  • Use View Profiles when traces show slow execution but logs don’t explain it.
  • Use Include/Exclude in query to refine results in Explore tracing based on the service you’re investigating.

Difference in scope

  • Trace: pivots and filters reflect the overall request context.
  • Span: pivots and filters reflect the specific operation/dependency boundary you selected.

Actions from Trace ID / Span ID

Where: Open the menu from the Trace ID (trace header) or Span ID (span header).

Trace / Span ID actions

Why: IDs are the fastest way to anchor an investigation:

  • Use Copy to clipboard to share the exact trace/span with teammates or attach it to an incident ticket.
  • Use Open in new tab to isolate the view in a new tab.
  • Use Explore similar traces/spans to open the Explore tracing page listing similar traces or spans.
  • Use Include in query to focus your dataset on the same entity when validating patterns.
  • Use Exclude from query to remove known items when reducing noise.

Difference in scope

  • Trace ID filters isolate the entire request instance.
  • Span ID filters isolate a single operation instance inside a trace.

Actions from the Timestamp

Where: Open the menu from the timestamp chip/value in the header.

Timestamp actions

Why: Timestamp actions are for correlation and evidence gathering:

  • Use Copy Timestamp to copy the full timestamp to use in logs or messages.
  • Use View Surrounding Explore Logs to pull the most relevant logs around the trace/span execution time (exceptions, retries, dependency timeouts).
  • Use View Surrounding Explore Traces/Spans to pull the most relevant traces/spans for the selected time frame.
  • Use Copy as Epoch to correlate quickly with systems that require epoch timestamps (dashboards, external tools, incident timelines).
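The Copy as Epoch conversion is the standard ISO-8601-to-epoch mapping, which you can reproduce with Python's standard library (the timestamp below is made up):

```python
from datetime import datetime

# A header-style timestamp with an explicit time zone offset.
ts = datetime.fromisoformat("2024-05-01T12:30:45+00:00")
epoch = int(ts.timestamp())  # epoch seconds, safe to paste into external tools
```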

Difference in scope

  • Trace timestamp actions correlate to the overall request timeframe.
  • Span timestamp actions correlate to the exact operation timeframe, which is usually more precise for root cause validation.

Actions from the Operation name

Where: Select the Operation name in the header row (available for both trace and span headers).

Operation name actions

Why: These actions help you pivot from "I found an interesting request/span" to "show me everything relevant for this operation".

  • Use Copy to clipboard to share the operation name.
  • Use View Logs Explore to validate why the operation behaved this way (errors, exceptions, timeouts).
  • Use Explore similar traces/spans to open the Explore tracing page listing similar traces or spans.
  • Use APM Service Catalog Operation to view this operation and its details in an APM context.
  • Use Include/Exclude in query to refine results in Explore tracing based on the operation you’re investigating.

Difference in scope

  • Trace: pivots and filters reflect the overall request context.
  • Span: pivots and filters reflect the specific operation/dependency boundary you selected.

Actions from Export

Where: In the trace header, select Export (top-right).

Export actions

Why: Export lets you take the exact table results you’re viewing and analyze them outside the UI (for example, calculate averages/percentiles and track improvements over time). The export respects your current filters and time range, so what you download matches what you investigated.

  • You can export Trace CSV or Span JSON for quick sharing, review, and analysis.
  • If a value doesn’t exist, the cell is exported as empty.
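As an illustration of the empty-cell behavior, here is a hypothetical two-row trace CSV parsed with Python's standard csv module:

```python
import csv
import io

# Hypothetical export: the second row has no duration value.
exported = "trace_id,duration_ms,status\nabc123,512,200\ndef456,,500\n"
rows = list(csv.DictReader(io.StringIO(exported)))
missing = rows[1]["duration_ms"]  # "" — absent values export as empty cells
```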

Investigation workflow

Use headers to guide a structured investigation:

1. Start at the trace header

  • Check status code.
  • Review total duration and percentile.
  • Assess service count and scope.

Determine whether the trace represents an anomaly.

2. Move to span headers

  • Identify spans with high duration percentage.
  • Locate the first error status.
  • Review service-to-service direction.

Determine which operation is responsible.

3. Correlate with logs

  • Use surrounding logs to validate errors.
  • Confirm stack traces or timeout conditions.

4. Validate recurrence

  • Explore similar traces or spans.
  • Open APM operation metrics.
  • Check percentile behavior.

5. Refine the query

  • Include or exclude attributes.
  • Narrow results to failing patterns.
  • Remove irrelevant traffic.

Using header information and quick actions reduces context switching and accelerates time to root cause.

Troubleshooting

Can’t see logs, Profiling, or Infrastructure? Refer to the Troubleshooting documentation.