
No Data State

No-data handling defines what an alert should do when its query returns no results and the system cannot evaluate the alert condition. You configure no-data handling when you create or edit a metric or log alert.

How no-data handling works

When an alert is evaluated:

  • If the query returns data, the alert evaluates normally
  • If the query returns no data, the configured no-data behavior is applied
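
The branch above can be modeled with a minimal sketch; the function names, state strings, and behavior identifiers below are illustrative assumptions, not the product's actual implementation:

```python
# Minimal sketch of the evaluation branch described above.
# Names, states, and behavior identifiers are illustrative only.

def evaluate(query_results, condition_breached, no_data_behavior, previous_state):
    """Return the alert state for one evaluation cycle."""
    if query_results:
        # Data returned: evaluate the alert condition normally.
        return "alerting" if condition_breached(query_results) else "ok"
    # No data returned: apply the configured no-data behavior.
    if no_data_behavior == "keep_last_state":
        return previous_state
    return {"set_ok": "ok", "set_alerting": "alerting", "set_no_data": "no_data"}[no_data_behavior]
```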

The same no-data options apply across metric and log alerts. However, the UI might present these options in different places depending on the alert type and how the alert evaluates data.

Where to configure no-data handling

You configure no-data handling as part of an alert definition.

The alert’s Advanced settings contain the no-data options. These settings define how the alert behaves when data is missing, rather than how it evaluates thresholds or usual values.

Depending on the alert type, the UI displays no-data options alongside other evaluation-related settings, such as:

  • Threshold conditions
  • Baseline or “usual value” evaluation
  • Evaluation and lookback windows
  • State change behavior

No-data handling applies to alert types that rely on query results to determine their state, including:

  • Metric alerts with threshold conditions
  • Metric alerts that compare values against a usual or expected range
  • Log alerts with threshold conditions
  • Log alerts that evaluate logs against usual behavior

If an alert relies on query results for evaluation, it supports no-data handling.
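
As a rough mental model of where the setting lives, the sketch below shows an alert definition with the no-data option grouped under advanced settings rather than under the condition itself. The field names, query, and values are hypothetical and do not reflect the product's actual schema:

```python
# Hypothetical shape of an alert definition; field names and values are
# illustrative only and are not the product's schema or API.
alert_definition = {
    "name": "High error rate",
    "query": "sum(rate(http_errors_total[5m]))",
    "condition": {"type": "threshold", "operator": "greater_than", "value": 100},
    "evaluation": {"window": "5m", "lookback": "15m"},
    "advanced_settings": {
        # The no-data behavior sits with the advanced settings, separate
        # from the threshold or baseline condition it falls back from.
        "no_data_behavior": "set_no_data",  # or "set_ok", "set_alerting", "keep_last_state"
    },
}
```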

Select a no-data behavior

When you configure an alert, you must decide how it should behave when no data is available. Each option represents a different assumption about what missing data means for your system.

The following sections explain each option in detail, including when to use it and what happens when it is selected.

Set OK

Use Set OK when missing data is expected and does not indicate a problem.

This option is best suited for workloads that are intentionally silent or only emit data under specific conditions.

Typical scenarios

  • Batch jobs that emit metrics only while running
  • Services that intentionally scale to zero
  • Periodic tasks that do not produce continuous telemetry

Alert behavior

  • When no data is available, the alert transitions to the OK state
  • The alert does not indicate an issue while data is missing

Things to consider

  • If telemetry stops unexpectedly, this option can hide real issues
  • Use this option only when you fully expect and understand periods with no data

Set alerting

Use Set alerting when missing data is likely a sign of a problem.

This option is appropriate for systems that should always emit telemetry and where silence is suspicious.

Typical scenarios

  • Infrastructure metrics (CPU, memory, disk)
  • Core services that must always be running
  • Critical pipelines where missing data likely indicates a failure

Alert behavior

  • When no data is available, the alert transitions to the alerting state
  • Missing data is treated the same as a breached alert condition

Things to consider

  • This option can generate alerts during telemetry outages
  • Use it when data availability is as important as the metric value itself

Keep last state

Use Keep last state when short or intermittent gaps in data are common and not meaningful.

This option prevents alert state changes caused by brief interruptions in data ingestion.

Typical scenarios

  • Temporary network issues
  • Short ingestion delays
  • Metrics scraped at irregular intervals

Alert behavior

  • When no data is available, the alert remains in its previous state
  • The alert resumes normal evaluation when data returns

Things to consider

  • Long gaps in data will not change alert state
  • This option prioritizes stability over immediate visibility into missing data

Set no data state

Use Set no data state when you want to explicitly track missing data as its own condition.

This option is ideal when missing data is neither OK nor alerting, but still important to observe and act on.

Typical scenarios

  • Telemetry and platform monitoring
  • Detecting broken exporters or agents
  • Identifying misconfigured queries or missing labels after deployments

Alert behavior

  • When no data is available, the alert enters the no-data state
  • The no-data state is visible in the UI and alert timelines
  • You can route no-data alerts separately using routing rules

Things to consider

  • This option provides the clearest visibility into issues with data availability
  • It allows you to distinguish between "system is unhealthy" and "system is silent"
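
To make the difference between the last two options concrete, the self-contained sketch below runs the same data gap through Keep last state and Set no data state. Names and state strings are illustrative, not the product's implementation:

```python
# Illustrative comparison of "keep_last_state" vs. "set_no_data" across a data gap.
# Names and state strings are hypothetical, not the product's implementation.

def next_state(has_data, breached, behavior, previous_state):
    if has_data:
        return "alerting" if breached else "ok"
    return previous_state if behavior == "keep_last_state" else "no_data"

# Each tuple is one evaluation cycle: (data available?, condition breached?)
cycles = [(True, True), (False, False), (False, False), (True, False)]

for behavior in ("keep_last_state", "set_no_data"):
    state, history = "ok", []
    for has_data, breached in cycles:
        state = next_state(has_data, breached, behavior, state)
        history.append(state)
    print(behavior, history)

# Expected output:
# keep_last_state ['alerting', 'alerting', 'alerting', 'ok']
# set_no_data ['alerting', 'no_data', 'no_data', 'ok']
```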

Important considerations

  • Set OK can hide real issues if telemetry stops unexpectedly. Use this option only when you fully expect and understand missing data.
  • Set no data state offers the clearest way to distinguish missing data from active alert conditions.
  • Review no-data behavior whenever you change queries, labels, scaling behavior, or ingestion pipelines.

How no-data handling behaves across alert types

No-data handling behaves consistently across alert types. What changes is how each alert determines no data, based on its evaluation logic.

| Alert evaluation type | Examples | When no data applies |
| --- | --- | --- |
| Threshold-based evaluation | Less than, greater than, equals | The query returns no time series to evaluate against the threshold |
| Baseline-based evaluation | More than usual, less than usual | The alert lacks enough data to calculate a usual value |
| Event-based evaluation | Notify immediately | The query returns no results during the evaluation window |
| Aggregation-based evaluation | Count, rate, sum over time | The aggregation receives no input during evaluation |
| Filter-based evaluation | Label- or attribute-filtered queries | Filters match no time series or log events |

This consistency lets you apply the same no-data strategy even when alert evaluation logic differs.
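
The sketch below shows the idea: the check that decides "no data" depends on the evaluation type, while the behavior applied afterwards is the same configured option. Evaluation type names and the baseline sample threshold are illustrative assumptions:

```python
# Illustrative only: how the "no data" check might differ by evaluation type.
# Evaluation type names and the baseline sample threshold are hypothetical.

def is_no_data(evaluation_type, query_results, min_baseline_samples=10):
    if evaluation_type == "threshold":
        return len(query_results) == 0                      # no time series to compare
    if evaluation_type == "baseline":
        return len(query_results) < min_baseline_samples    # not enough data for a usual value
    if evaluation_type in ("event", "aggregation", "filter"):
        return len(query_results) == 0                      # no events, no input, or no matches
    raise ValueError(f"unknown evaluation type: {evaluation_type}")

# Whatever the evaluation type, the same configured no-data behavior is
# applied once is_no_data(...) returns True.
```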

What happens when data returns

When valid data becomes available again, the alert resumes normal evaluation automatically.

  • The alert exits the no-data condition (if you selected Set no data state)
  • The system evaluates new data using the configured thresholds or usual values
  • The alert transitions to OK or alerting based on the new results

You do not need to take any manual action.

Example timeline

This example shows how an alert behaves during and after a no-data period:

  1. The alert evaluates normally and enters the alerting state.
  2. The query stops returning results, and the alert follows the configured no-data behavior.
  3. Data resumes, and the query returns valid results.
  4. The alert reevaluates and transitions to:
    • OK, if the condition no longer applies
    • Alerting, if the condition still applies

How evaluation windows apply

When data returns, the alert evaluates only new data using the configured lookback and evaluation windows. The system does not replay missing data.

This approach ensures alert decisions reflect current system behavior rather than historical gaps.
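
A small sketch of this idea, assuming timestamped datapoints and an illustrative 15-minute lookback window: only points inside the current window are evaluated, and the gap itself contributes nothing to replay:

```python
# Illustrative only: after a gap, evaluation sees just the datapoints that fall
# inside the current lookback window; missing intervals are not replayed.
from datetime import datetime, timedelta

def points_in_window(datapoints, now, lookback=timedelta(minutes=15)):
    """Keep only datapoints whose timestamps fall within the lookback window."""
    return [(ts, value) for ts, value in datapoints if now - lookback <= ts <= now]

now = datetime(2024, 1, 1, 12, 0)
datapoints = [
    (datetime(2024, 1, 1, 11, 20), 80),   # before the gap and outside the window
    # 11:25-11:50 was the gap: no datapoints exist, so there is nothing to replay
    (datetime(2024, 1, 1, 11, 55), 120),  # data has resumed
    (datetime(2024, 1, 1, 11, 58), 130),
]
print(points_in_window(datapoints, now))  # only the post-gap points are evaluated
```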

Avoiding alert flapping

Short interruptions in data do not immediately trigger repeated state changes. Choosing Keep last state or Set no data state helps reduce unnecessary transitions when data briefly disappears and then recovers.

Visibility of no-data states

When you select Set no data state, the alert enters a distinct no-data state that appears in alert timelines and status views.

This distinction helps you tell the difference between:

  • Alerts that fire because conditions are met
  • Alerts that the system cannot evaluate because data is missing

On their own, no-data states affect alert evaluation and state transitions; they do not change notification behavior or routing unless you configure routing rules to handle no-data alerts separately.

When to review no-data settings

Review no-data behavior regularly to ensure it still matches how your systems operate.

Revisit no-data settings when:

  • You add, remove, or change labels in metric or log queries
  • You change the deployment or scaling behavior
  • Alerts remain unexpectedly quiet
  • Alert timelines show frequent transitions into and out of no-data states

Clear no-data configuration makes alert behavior predictable, reduces confusion during incidents, and simplifies troubleshooting when data goes missing.