Create dataset alerts for logs

Dataset alerts let you run log-based alerts on top of any logs-based dataset in your dataspace. Instead of evaluating every alert against default/logs, you can point the alert engine at the dataset that contains the signal you want to monitor. This helps you focus your alerting and keeps your workflows aligned with how your data is organized.

If you are new to datasets, see What is a dataspace and dataset?

When to use dataset alerts

Use dataset alerts when you want to:

  • Monitor logs that flow into a dedicated dataset
  • Track compliance or security activity stored in system datasets

What you need

Dataset alerts work with any logs-compatible dataset to which you have access. You need:

  • A logs-compatible dataset with write access enabled
  • Policy-based (RBAC) access to that dataset

How dataset alerts work

When you select a dataset as your Datasource, the alert engine evaluates only the data stored in that dataset. The alert applies the same conditions, thresholds, and notification rules used in any other logs-based alert.
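For example, with system/aaa.audit_events selected as the Datasource, the alert's query and conditions run only over that dataset's records. In a sketch like the following, the action field and its value are illustrative and depend on the dataset's actual schema:

source system/aaa.audit_events
| filter action = "delete"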

How the Datasource selector works in your alert definition

In the Query step of alert creation, you configure the dataset used for evaluation. The Datasources section displays the following description:

Datasets group related data within a dataspace, helping with query performance, access control, and data organization.

You then select:

  • Dataspace (for example, default or system)
  • Dataset (for example, logs, notification.deliveries, aaa.audit_events)

If a dataset does not appear, it might be:

  • Not logs-compatible
  • Missing write access
  • Restricted by policy-based (RBAC) access

Example: Alert on failed notification deliveries using a system dataset

Imagine you manage a production environment where alerts notify teams across Slack, PagerDuty, and webhook endpoints. Recently, a few critical alerts fired but never reached the on-call engineer. The alert logic was correct, but issues occurred in the notification delivery pipeline.

To troubleshoot, you enable write access to the system/notification.deliveries dataset. This dataset captures every notification attempt, including:

  • Delivery status
  • Destination type
  • Response codes
  • Timestamps

You want to detect patterns such as:

  • Repeated Slack delivery failures
  • Spikes in 5xx webhook responses
  • Drops in successful PagerDuty deliveries

Instead of mixing these logs into default/logs, you evaluate the alert directly on this system dataset.

How to select a dataset for a logs-based alert

When you create or edit a logs-based alert, follow these steps to select which dataset the alert evaluates.

  1. Go to Alerts, then Alert definitions, and select Create alert.
  2. Scroll to the Query panel and find the Datasources section.
  3. In Dataspace, select where the dataset lives. Common options include:

    • default for standard application logs
    • system for system-level datasets such as audit events, alert history, or notification logs

  4. In Dataset, select the logs-based dataset you want the alert to evaluate. Examples include:

    • alerts.history
    • notification.deliveries
    • aaa.audit_events
    • labs.limitViolations
    • engine.queries

  5. If the dataset you expect does not appear, it might be unavailable for one of several reasons: it might not be logs-compatible, might not be a streaming dataset, might not have write access enabled, or might be restricted by RBAC. Some datasets are intentionally exempt from alerting because they are not written in-stream. If you are unsure why a dataset is missing, contact Support.
  6. In Search query, enter the query you want the alert to run. For example:

source system/notification.deliveries
| filter status = "failed"
| filter target_type = "slack"
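The other patterns from the example above could be sketched similarly. The target_type values and the status_code field shown here are assumptions about the dataset's schema, so check them against your data:

source system/notification.deliveries
| filter target_type = "webhook"
| filter status_code >= 500

source system/notification.deliveries
| filter target_type = "pagerduty"
| filter status = "success"

The second query matches successful deliveries, so to detect a drop you would pair it with a "less than" threshold condition rather than a "more than" one.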

What happens after you select a dataset?

  • The alert evaluates only the dataset you selected.
  • The alert does not use default/logs unless you select it explicitly.
  • The Verify alert step is skipped because dataset alerts do not rely on OpenSearch.

Field reference

Dataspace: The namespace that contains your datasets. Examples include default and system.

Dataset: The dataset the alert evaluates. Defaults to default/logs. Only logs-compatible datasets appear.

Query: DataPrime filters applied to the selected dataset.

Conditions: Thresholds and evaluation windows used to trigger the alert.

Notifications: Destinations and priorities used when the alert is triggered.

Using dataset alerts via the Alerts API

Dataset selection in the UI maps directly to the data_sources field in Alerts API v3. When creating or updating a logs-based alert, you can define the dataspace and dataset programmatically:

{
  "alert_def_properties": {
    "data_sources": [
      {
        "data_space": "system",
        "data_set": "notification.deliveries"
      }
    ]
  }
}

If you omit data_sources, the alert behaves exactly as before and is evaluated against default/logs.

This makes the feature fully backward compatible; no existing alerts or integrations need updating.
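Selecting the default dataset explicitly is equivalent to omitting data_sources entirely:

{
  "alert_def_properties": {
    "data_sources": [
      {
        "data_space": "default",
        "data_set": "logs"
      }
    ]
  }
}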

For the full API spec, see Alerts API v3 - Data sources.

Best practices

Use these guidelines when creating dataset alerts:

  • Select datasets that isolate exactly the data you want to monitor.
  • Review the dataset schema so your query matches its fields.
  • Add labels to organize alerts by service, team, or environment.

Dataset alerts help you keep alerting aligned with your data model. As you adopt dataspaces and datasets, they give you cleaner, more targeted, and more reliable alerting.
