Threshold Alerts
As part of Coralogix Alerting, metric alerts monitor metric behavior and trigger notifications when the values you track cross predefined thresholds. These alerts help you maintain system performance, reliability, and security.
Metric alerts evaluate specific metrics in your Coralogix dashboard and notify you when conditions cross the thresholds you configure. They monitor critical performance indicators, such as CPU utilization, response times, error rates, and resource usage in cloud environments, and provide early warning when values deviate from expected behavior.
You can create PromQL-based metric alerts for standard metric sources such as Prometheus and CloudWatch, or for metrics extracted from logs through Events2Metrics.
What you need
- Ingest metrics into Coralogix.
- Define a PromQL query that returns the values you want to evaluate.
Create an alert
- Go to Alerts, then Alert management.
- Select Create alert.
- In Alert type, select Metric.
Add alert details
In the Details section:
- Enter the alert name.
- Enter the alert description, which appears in alert notifications.
- Add labels, or nest labels using key:value.
- Select Set as security alert to add the alert_type:security label so you can filter these alerts in the Incidents page.
Define the PromQL query
In the Query section:
- Enter the expression in Search query.
- Select PromQL Documentation if you need reference material.
Note
- The system displays auto-complete suggestions as you type.
- Aggregate values by application, subsystem, instance, or any metric label.
- Add labels to narrow a metric.
- Use by() to aggregate and control notification grouping, as shown in the example below.
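A minimal sketch of both patterns, assuming a hypothetical gauge named system_cpu_utilization with application and subsystem labels:

```
# Narrow the metric with label matchers
# (metric and label names are illustrative)
avg(system_cpu_utilization{application="web-api", subsystem="checkout"})

# Aggregate with by() so each application/subsystem pair becomes its own
# series, and therefore its own notification group
avg by (application, subsystem) (system_cpu_utilization)
```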
Configure conditions
Use the Conditions section to define when the metric alert triggers. The system evaluates each condition rule in priority order and triggers the alert when the highest-priority rule evaluates to true.
Select the alert condition type
In Alert when, select one of the following operators:
- Less than threshold
- Less than or equals threshold
- More than threshold
- More than or equals threshold
- More than usual (dynamic alert)
- Less than usual (dynamic alert)
Dynamic alerts use behavior-based baselines. All other operators evaluate static thresholds.
Set condition rules
Each rule follows this structure:
When the query returns:
<operator> <number>, for at least / at least once in / for over <time> <unit>, trigger a <severity> alert.
For example: more than 85, for at least 5 minutes, triggers a Critical alert.
Select + Add condition rule to add more evaluation paths. The system evaluates rules by priority.
Threshold evaluation modes
Metric threshold alerts use three evaluation modes that define how long the threshold must remain true in the selected timeframe.
- for at least: The threshold must remain true for the entire timeframe with no interruptions. Example: If you configure more than 1 for at least 5 minutes, the metric must stay above 1 continuously for all 5 minutes.
- at least once in: The threshold must occur at least one time in the timeframe. Example: If you configure at least once in 10 minutes, the alert triggers if the metric crosses the threshold once in that 10-minute window.
- for over x%: The threshold must hold for more than a percentage of the timeframe. Example: If you configure for over 10% of 10 minutes, the metric must exceed the threshold for more than 1 minute.
Additional rules:
- 0%: the alert triggers if the threshold is crossed once.
- 100%: every value in the timeframe must meet the threshold.
Percentage requirements and missing values
Percentage requirements define how many data points must exist before the alert can trigger reliably.
- 0%: any breach can trigger the alert.
- 100%: every data point must exist and meet the threshold.
- When data is missing, the system calculates percentages from existing points unless you replace missing values with 0.
Why this matters
If you query 10 minutes of data and only 6 data points exist, those 6 points represent 100% of the timeframe unless the system replaces missing values with 0. This behavior can cause false triggers.
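As a worked example, assuming 1-minute steps (see the step-interval table in the FAQs): a 10-minute window expects 10 data points. If only 6 arrive and 3 of them breach the threshold, the alert sees 3 of 6 points (50%) rather than 3 of 10 (30%), so a rule set to for over 40% fires even though only 30% of the expected window actually breached.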
Replace missing values with 0
Enable Replace missing values with 0 to treat missing data points as 0. When enabled, the system hides percentage controls because 0-values guarantee full data coverage.
Advanced settings
Expand Advanced settings to modify evaluation behavior.
Delay alert evaluation
Delay evaluation by a set number of seconds to avoid triggering from ingestion delays or transient spikes.
Undetected values (Less than operators only)
The Undetected values section appears only when you use a Less than operator. Undetected values occur when a metric label permutation stops sending data. Without safeguards, these gaps can trigger repeated alerts.
Use the controls in this section to manage how the alert handles missing metric series:
- Enable triggering on undetected values to turn this behavior on or off.
- Auto retire to retire undetected values after a selected period (None, Never, 24h, 12h, 6h).
- Manual retirement while reviewing triggered alerts.
Preview the alert
Select Preview alert to view:
- Query results from the past 24 hours
- Threshold overlays
- Up to 100 time series
Use this preview to confirm that the threshold logic behaves as expected.
Configure notifications
In the Notifications section:
- Notify every: Set the minimum time between notifications while the alert remains active. The system suppresses additional notifications until the interval passes.
- Notify when resolved: Send a notification when the alert condition clears.
- Grouped notifications: When the query returns multiple label combinations, select one of the following:
- Trigger a single alert when at least one combination meets the condition: Send one aggregated incident with all matching combinations.
- Trigger a separate alert for each combination that meets the condition: Select label keys to split notifications. The system sends one incident per combination (see the example after this list).
Note
- Enter Group By keys as free text.
- The system evaluates up to 1,000 permutations and tracks only the first 1,000.
- Webhooks: Select + Add webhook to send notifications to Slack, PagerDuty, or custom endpoints.
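As an illustration of how Group By keys relate to the query, here is a sketch with hypothetical metric and label names; entering application and subsystem as Group By keys would produce one incident per pair returned by the query:

```
# Each (application, subsystem) pair this query returns is one label
# combination; with those labels as Group By keys, each pair that meets
# the condition gets its own incident (up to 1,000 permutations).
sum by (application, subsystem) (rate(http_requests_errors_total[5m]))
```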
Phantom mode
Toggle Enable Phantom mode to silence the alert. Phantom alerts do not create incidents or send notifications and work as components inside Flow alerts.
Schedule (optional)
Use Schedule to restrict when the alert can trigger, based on your local time.
View alert activity
Incidents page
Use the Incidents page to view active and historical alert events. Drill down into an event to review conditions, queries, and metric values. For more information, see Incidents.
Alert map
Use Alert map to view a real-time grid of all alert statuses. Go to Alerts, then Alert map. For more information, see Alert Maps.
FAQs
How long does a new alert take to activate?
Expect each new alert to activate within 15 minutes, usually sooner.
How does the system define step intervals?
- Up to 30 minutes → 1-minute steps
- Up to 12 hours → 5-minute steps
- Over 12 hours → 10-minute steps
Why do I see missing values?
Your data might arrive late, or ingestion might be delayed.
How do I avoid false triggers from missing values?
Replace missing values with 0 or require 100% of the timeframe to contain data.
How do I debug an alert?
View the metric in Grafana or a Coralogix Custom Dashboard to check data completeness and ingestion timing.
What if 0 is a valid value?
Use a PromQL function that ensures the query returns a value instead of null.
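One common pattern, shown here as a sketch with a hypothetical counter named requests_total, combines PromQL's or operator with vector() so the query always returns a value:

```
# If the inner expression returns no data, fall back to a constant series.
# vector(0) keeps the series present; substitute another sentinel value if
# your thresholds need to distinguish missing data from a real 0.
# Note: the fallback has no labels, so it suits single-series queries.
sum(rate(requests_total[5m])) or vector(0)
```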
Deprecation notice
Coralogix is deprecating Lucene-based metric alerts and will convert them automatically to PromQL as part of the transition from Logs2Metrics to Events2Metrics.
Support
Need help?
Our world-class customer success team is available 24/7 to walk you through your setup and answer any questions that may come up.
Contact us via our in-app chat or by emailing support@coralogix.com.

