This guide will help you use our alerts API to define, query, and manage your alerts.
New! The Alerts API now supports our expanded Notification Group By alert settings.
Select the Management Endpoint associated with your Coralogix domain.
Notes:
STEP 1. In your navigation pane, click Data Flow > API Keys.
STEP 2. Click GENERATE NEW API KEY in the section entitled Alerts, Rules and Tags API Key. Copy the new key.
Note: Only admins are authorized to view this API Key.
STEP 3. The generated Alerts, Rules and Tags API Key must be added to the header of each HTTP request to the API. Configure it as a 'Bearer Token':
Authorization: Bearer YOUR_API_KEY
STEP 4. Define the header Content-Type as application/json.
Content-Type: application/json
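For reference, here is a minimal sketch of steps 3 and 4 in Python using the `requests` library. It is an illustration only, not an official client: replace the base URL with the Management Endpoint for your Coralogix domain (the api.coralogix.com URL below is the one used in the examples later in this guide) and YOUR_API_KEY with the key generated in step 2.

```python
# Illustrative sketch only: header setup for the Alerts API (steps 3-4).
import requests

API_KEY = "YOUR_API_KEY"  # the Alerts, Rules and Tags API Key from step 2
BASE_URL = "https://api.coralogix.com/api/v1/external/alerts"  # replace with your domain's endpoint

headers = {
    "Authorization": f"Bearer {API_KEY}",   # step 3: Bearer token
    "Content-Type": "application/json",     # step 4: JSON content type
}

# Quick sanity check: list existing alerts with the headers above.
response = requests.get(BASE_URL, headers=headers)
print(response.status_code)
```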
Parameter | Description | Type | Note |
---|---|---|---|
name | The alert name | string | |
severity | Alert Severity | string | Must be one of the following options: ["info", "warning", "critical"] |
is_active | A boolean that determines whether the alert is active or not | boolean | |
log_filter | An object that represents the filter definition of the alert | object | Each property of that object is described below |
log_filter.text | The query_string query to be notified on. Can be written in an "advanced" format for more accurate results. | string (case sensitive) | |
log_filter.category | An array of log categories to be notified on | array | |
log_filter.filter_type | Type of the log filter | string | Must be one of the following options: ["text", "ratio", "unique_count", "relative_time", "metric"]. For 'new_value' alerts the value should be "text" |
log_filter.severity | An array of log severities that we are interested in | array | Must be chosen from the following options: ["debug", "verbose", "info", "warning", "error", "critical"] |
log_filter.application_name | An array of strings that includes the different matches for application name filters | array | There are 4 types of matching criteria: 'Is', 'Starts With', 'Includes', 'Ends With'. E.g. log_filter.application_name: ["production", "filter:startsWith:prod", "filter:contains:duct", "filter:endsWith:ion"] |
log_filter.subsystem_name | An array of strings that includes the different matches for subsystem name filters | array | There are 4 types of matching criteria: 'Is', 'Starts With', 'Includes', 'Ends With'. E.g. log_filter.subsystem_name: ["my-nice-app", "filter:startsWith:my", "filter:contains:nice", "filter:endsWith:app"] |
log_filter.computer_name | An array of log computer names to be notified on | array | |
log_filter.class_name | An array of log class names to be notified on | array | |
log_filter.ip_address | An array of log IP addresses to be notified on | array | |
log_filter.method_name | An array of log method names to be notified on | array | |
log_filter.alias | A string of text for the Query 1 alias. Relevant to ratio alerts only | string | |
log_filter.ratioAlerts | An array containing an object representing Query 2 for ratio alerts | array | Keys supported in the Query 2 object are: application_name, subsystem_name, severity, text, alias, and group_by (an array of strings with group-by fields). If the root alert has a group_by defined, it has to be the same or empty; if the root alert does not have a group_by, you can choose a specific group_by for Query 2. The meaning of the keys is the same as above |
condition | An object that specifies a condition for triggering the alert | object (or null, which represents an 'immediate' alert) | Each property of that object is described below |
condition.condition_type | Type of the condition | string | Must be one of the following options: ['less_than', 'more_than', 'more_than_usual', 'new_value']. For 'unique_count' alerts the value should be "more_than". For 'metric' alerts the value can be one of ['less_than', 'more_than'] |
condition.unique_count_key | Unique count key name | string | Select from the list of keys |
condition.threshold | The number of log occurrences that is needed to trigger the alert | number | |
condition.timeframe | The bounded time frame within which the threshold must be crossed to trigger the alert | string | Must be one of the following options: ['5Min', '10Min', '20Min', '30Min', '1H', '2H', '3H', '4H', '6H', '12H', '24H']. For 'new_value' alerts you should send one of the following options: ['12H', '24H', '48H', '72H', '1W', '1M', '2M', '3M']. For 'relative_time' alerts you should send one of the following options: ['HOUR', 'DAY'] |
condition.relative_timeframe | The timeframe to compare with (relevant only for 'Time Relative' alerts) | string | Must be one of the following options: ['HOUR', 'DAY', 'WEEK', 'MONTH'] |
condition.group_by | The field to 'group by' on | string or array | The name of the key; if nested, specify the full path. Relevant only for: 'more than', 'more than usual', 'new value', 'unique count', and 'metric' alerts. Using an array for the group_by property will aggregate by multiple labels. For 'new_value' alerts it must contain a value |
condition.group_by_lvl2 | The subsequent field to 'group by' on | string | The name of the key; if nested, specify the full path. Relevant only for 'more than' alerts |
condition.metric_field | The name of the metric field to alert on | string | Required for Metric alert type only |
condition.metric_source | The source of the metric. Either ‘logs2metrics’ or ‘prometheus’ | string | Required for Metric alert type only |
condition.arithmetic_operator | 0 – avg, 1 – min, 2 – max, 3 – sum, 4 – count, 5 – percentile (for percentile you need to supply the requested percentile in arithmetic_operator_modifier) | number | Required for Metric alert type only |
condition.arithmetic_operator_modifier | for percentile(5) arithmetic_operator you need to supply the value in this property | number | Required for Metric alert type only |
condition.sample_threshold_percentage | The metric value must cross the threshold within this percentage of the timeframe (sum and count arithmetic operators do not use this parameter since they aggregate over the entire requested timeframe), increments of 10, 0<=value<=90 | number | Required for Metric alert type only |
condition.non_null_percentage | The minimum percentage of the timeframe that should have values for this alert to trigger. Increments of 10, 0<=value<=100 | number | Required for Metric alert type only |
condition.swap_null_values | If set to true, missing data will be considered as 0, otherwise, it will not be considered at all | boolean | Required for Metric alert type only |
notifications | An object that specifies which notification channels to use when the alert is triggered | object | Each property of that object is described below |
notifications.emails | An array of email addresses to notify when the alert is triggered | array | |
notifications.integrations | An array of integration channels to notify when the alert is triggered | array | Each item in the array is the alias name of the integration |
notify_every | The suppress time for the alert | number (in seconds) | By default, when creating an alert through the UI or the API, notify_every is populated with 60 (seconds) for 'immediate', 'more than', and 'more than usual' alerts. For 'less than' alerts it is populated with the chosen timeframe of the 'less than' condition (in seconds). You may change the suppress window so the alert is suppressed for a longer period |
description | alert description | string | |
notif_payload_filter | An array of log fields (from the log example) to include with the alert notification | array | |
meta_labels | An array that can contain objects with the structure {"key": "MY_KEY", "value": "MY_VALUE"}, or ':'-delimited strings with the pattern "MY_KEY:MY_VALUE" | array | 'key' must contain only letters or digits, but can include '_' or '-'. Both 'key' and 'value' must be no longer than 255 characters |
log_filter.tracing.fieldFilters | An array that contains filters for applicationName, subsystemName, and serviceName | array | log_filter.tracing.fieldFilters: [{"field": "<field>", "filters": [{"fields": ["value1", "value2", ...], "operator": "<operator>"}]}]. Options for the "field" (meta) field: "applicationName", "subsystemName", "serviceName". Options for the "operator" field: "equal", "startsWith", "contain", "endsWith" |
log_filter.tracing.tagFilters | An array that contains filters for tags | array | log_filter.tracing.tagFilters: [{"field": "<tag to monitor>", "filters": [{"fields": ["value1", "value2", ...], "operator": "<operator>"}]}]. Options for the "operator" field: "equal", "startsWith", "contain", "endsWith" |
log_filter.tracing.conditionLatency | Numeric value for latency (the alert monitors values above this parameter) | number | |
cleanup_deadman_duration | Sets a value for the advanced "Auto retire values" parameter | string or null | This optional parameter is only relevant for Standard alerts with a 'less than' condition and a 'group by'. The value should be one of the following: [null, '5min', '10min', '1h', '2h', '6h', '12h', '24h']. Create alert scenario: when omitted from the alert payload, the value defaults to null (Auto retire is disabled). Update alert scenario: when omitted from the alert payload, the previous value of this parameter is retained |
Note:
Parameter | Description | Type | Note |
show_in_insight.retriggeringPeriodSeconds | The "Retriggering" period (in seconds) for the alert to be shown in the Coralogix Insights screen | number [Optional] | |
show_in_insight.notifyOn | The “Notify on” parameter is used to define if we want to show the “resolve” event in the insight screen | enum[Optional] | “triggered_only” / “triggered_and_resolved” |
notification_groups[].groupByFields | A separate notification will be sent based on the “GroupByFields” labels (must be a sub-group of the alert group by labels) | array[Optional] | |
notification_groups[].notifications[].retriggeringPeriodSeconds | the “Retriggering” period for the specific notification target setting in this notification group | number[Optional] | |
notification_groups[].notifications[].notifyOn | The “Notify on” for the specific notification target setting in this notification group | enum[Optional] | “triggered_only” / “triggered_and_resolved” |
notification_groups[].notifications[].integrationId | The integrationId (target) for the specific notification target setting in this notification group. The same notification target setting cannot have both emailRecipients and integrationId (only one of them) | number | |
notification_groups[].notifications[].emailRecipients | The email recipients (target) for the specific notification target setting in this notification group. The same notification target setting cannot have both emailRecipients and integrationId (only one of them) | array | |
This is an example payload for alert creation via the API. The response of that request returns an alert_id. You can save this id in case you want to update the alert later on.
{ "name": "Security Alert", "severity": "info", "is_active": true, "log_filter": { "text": "authentication failed", "category": null, "filter_type": "text", "severity": ["error", "critical"], "application_name": ["production"], "subsystem_name": ["my-app","my-service"], "computer_name": null, "class_name": null, "ip_address": null, "method_name": null }, "condition": { "condition_type": "more_than", "threshold": 100, "timeframe": "10MIN", "group_by": "host" }, "show_in_insight": { "retriggeringPeriodSeconds": 120, "notifyOn":"triggered_and_resolved", }, "notification_groups": [{ "groupByFields": ["host"], "notifications":[ { "retriggeringPeriodSeconds": 120, "notifyOn":"triggered_and_resolved", "emailRecipients": ["[email protected]"] }, { "retriggeringPeriodSeconds": 240, "notifyOn":"triggered_only", "intergrationId": 1234512 }] },{ "groupByFields": [], "notifications":[ { "retriggeringPeriodSeconds": 240, "notifyOn":"triggered_and_resolved", "emailRecipients":["[email protected]"] }] }], "description": "", "active_when": { "timeframes": [{ "days_of_week": [ 1, 3, 4, 5, 2 ], "activity_ends": "10:00:59", "activity_starts": "05:00:00" }] }, "notif_payload_filter": [ "message" ] }
201 Created { "status": "success", "message": "Alert created successfully", "alert_id": [ "261e2741-8437-4397-926f-02d7390622a7" ], "unique_identifier": [ "ba4c6eae-c3d9-4bb3-a670-faf35a488413" ] }
400 Bad Request { "status": "invalid alert", "message": "Non valid value was received for field", "errors": [ "err_reason1", "err_reason2" ], "warnings": [] }
403 Forbidden { "status": "alerts limit exceeded", "message": "Company was reached the maximum alerts it can produce", "limit": "company’s alerts limit" //Usually it's 500 }
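As an illustrative sketch (not an official client), the create request could be sent in Python with `requests` as follows. The payload is an abridged version of the example body above, and the endpoint and headers follow the earlier setup steps.

```python
# Illustrative sketch only: creating an alert with the Alerts API.
import requests

BASE_URL = "https://api.coralogix.com/api/v1/external/alerts"  # replace with your domain's endpoint
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json",
}

# Abridged version of the example payload shown above.
payload = {
    "name": "Security Alert",
    "severity": "info",
    "is_active": True,
    "log_filter": {
        "text": "authentication failed",
        "filter_type": "text",
        "severity": ["error", "critical"],
        "application_name": ["production"],
        "subsystem_name": ["my-app", "my-service"],
    },
    "condition": {
        "condition_type": "more_than",
        "threshold": 100,
        "timeframe": "10MIN",
        "group_by": "host",
    },
    "notif_payload_filter": ["message"],
}

response = requests.post(BASE_URL, headers=headers, json=payload)
response.raise_for_status()
created = response.json()
# Save these for later updates: alert_id changes when certain fields are updated,
# unique_identifier does not.
print(created["alert_id"], created["unique_identifier"])
```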
Notes:
Parameter | Description | Type | Note |
---|---|---|---|
id | The UUID that identifies the alert in the system | string | Mandatory if the "unique_identifier" field is not provided |
unique_identifier | An alert unique identifier | string | Mandatory if the "id" field is not provided |
Alert fields | You can add to the request any field that you want to update. See the body params of the POST request above. Note: if you are updating an alert and you set 'condition.group_by' to an empty string or null, the update will remove the 'group by' field if it exists. | | |
Note: If you update one of the following fields, a new alert is created behind the scenes and the id of the alert therefore changes. Take this into account when updating your alerts via the API, as the id is mandatory in PUT requests. You can also use unique_identifier instead, which does not change during any operation.
log_filter.severity, log_filter.text, log_filter.application_name, log_filter.subsystem_name, condition.group_by, condition.threshold, condition.timeframe, condition.condition_type, notify_every
This is an example payload for updating an alert via the API. In the case below, we just want to update the name of the alert.
{ "id": "c892ecf7-ee83-4484-9682-266d351be918", "name": "New Name!" }
201 Ok { "status": "success", "message": "Alert updated successfully", "alert_id": "c892ecf7-ee83-4484-9682-266d351be918", "unique_identifier": "ba4c6eae-c3d9-4bb3-a670-faf35a488413" }
400 Bad Request { "status": "invalid alert", "message": "Non valid value was received for field", "errors": [ "err_reason1", "err_reason2" ], "warnings": [] }
404 Not Found { "status": "alert not found", "message": "Failed to update alert" }
This is an example payload for updating an alert via the API. In the case below, we want to update the condition timeframe of our 'more than' alert from 5MIN to half an hour. As described in the note above, this will generate a new alert, and the response returns the new alert id for future use, while the unique_identifier stays the same.
{ "id": "c892ecf7-ee83-4484-9682-266d351be918", "condition": { "timeframe": "30MIN" } }
201 Ok { "status": "success", "message": "Alert updated successfully", "alert_id": "bb56345b-22be-4570-ba08-3320a8821a69", "unique_identifier": "ba4c6eae-c3d9-4bb3-a670-faf35a488413" }
400 Bad Request { "status": "invalid alert", "message": "Non valid value was received for field", "errors": [ "err_reason1", "err_reason2" ], "warnings": [] }
404 Not Found { "status": "alert not found", "message": "Failed to update alert" }
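Here is a minimal sketch of the timeframe update above as a PUT request in Python with `requests`, assuming the same base endpoint and headers as before. The returned alert_id should be stored, since this field change recreates the alert.

```python
# Illustrative sketch only: updating the condition timeframe of an existing alert.
import requests

BASE_URL = "https://api.coralogix.com/api/v1/external/alerts"  # replace with your domain's endpoint
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json",
}

payload = {
    "id": "c892ecf7-ee83-4484-9682-266d351be918",
    "condition": {"timeframe": "30MIN"},
}

response = requests.put(BASE_URL, headers=headers, json=payload)
response.raise_for_status()
updated = response.json()
# condition.timeframe is one of the fields that recreates the alert,
# so store the new alert_id (or keep addressing the alert by unique_identifier).
print(updated["alert_id"], updated["unique_identifier"])
```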
One of these two IDs is required:
Parameter | Description | Type | Note |
---|---|---|---|
id | The UUID that identifies the alert in the system | string | Mandatory if the "unique_identifier" field is not provided |
unique_identifier | An alert unique identifier | string | Mandatory if the "id" field is not provided |
200 Ok { "status": "success", "message": "Alert deleted successfully" }
400 Bad Request { "status": "invalid id", "message": "Non valid value was received for field", "errors": [ "id => \"bb56345b-22be-4570-ba08-3320a8821accc\" is not a valid uuid" ], "warnings": [] }
404 Not Found { "status": "alert not found", "message": "Failed to delete alert" }
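For illustration, a hedged sketch of a delete call in Python with `requests`, assuming the alert id (or unique_identifier) is sent in the body of a DELETE request to the same base endpoint; verify the exact method and body shape your endpoint expects.

```python
# Illustrative sketch only (verify the method/body your endpoint expects):
# deleting an alert by id with a DELETE request to the base endpoint.
import requests

BASE_URL = "https://api.coralogix.com/api/v1/external/alerts"  # replace with your domain's endpoint
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json",
}

# Either "id" or "unique_identifier" may be supplied, per the table above.
payload = {"id": "bb56345b-22be-4570-ba08-3320a8821a69"}

response = requests.delete(BASE_URL, headers=headers, json=payload)
print(response.status_code, response.json())
```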
Parameter | Description | Type | Note |
---|---|---|---|
applicationName | Checks whether it is contained in the alert's log_filter.application_name array | string (case sensitive) | |
subsystemName | Checks whether it is contained in the alert's log_filter.subsystem_name array | string (case sensitive) | |
severity | Query by alert's severity | string | Must be one of the following options: ["info", "warning", "critical"] |
fromTimestamp | Query all alerts that have been created from a specific timestamp until now | string – ISO format “2020-06-07T17:30:00.000Z” | |
id | Query by alert’s id | string | |
uniqueIdentifier | Query by alert’s unique_identifier | string |
Note: Pass the query parameters for the GET request through its URL in the following manner:
<Base API Endpoint>?severity=<severity>&applicationName=<application_name>&subsystemName=<subsystem_name>&fromTimestamp=<YYYY-MM-DDThh:mm:ss.000Z>
<Base API Endpoint>?id=<id>
<Base API Endpoint>?uniqueIdentifier=<unique_identifier>
Examples:
https://api.coralogix.com/api/v1/external/alerts?severity=warning&applicationName=metro-prod&subsystemName=metro-web&fromTimestamp=2020-06-07T17:30:00.000Z
https://api.coralogix.us/api/v1/external/alerts?id=f3a461d8-21b1-4049-9dd1-9bcdd6b7ad90
https://api.coralogix.us/api/v1/external/alerts?uniqueIdentifier=cecd5e10-a116-11eb-94da-dfa0f95b7fed
This URL will return all the alerts with 'warning' severity (alert level, not log level), application metro-prod, subsystem metro-web, that were created after 17:30 on June 7th. The fromTimestamp should be a string in ISO format; you don't have to specify the hour, so a date like &fromTimestamp=2020-06-07 will suffice to get all alerts created since June 7th, 2020.
If you pass no query parameters, the response will include all of your alerts.
200 Ok { "total": results_count, //number "message": [alerts..] //array }
400 Bad Request { "status": "invalid query param", "message": "Non valid value was received for field", "errors": [ "err_reason1", "err_reason2" ] }
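Below is a small sketch of the first example query above in Python with `requests`, passing the same query parameters and reading the "total" and "message" fields of the 200 response.

```python
# Illustrative sketch only: querying alerts with the same parameters as the
# first example URL above.
import requests

BASE_URL = "https://api.coralogix.com/api/v1/external/alerts"  # replace with your domain's endpoint
headers = {"Authorization": "Bearer YOUR_API_KEY"}

params = {
    "severity": "warning",
    "applicationName": "metro-prod",
    "subsystemName": "metro-web",
    "fromTimestamp": "2020-06-07T17:30:00.000Z",
}

response = requests.get(BASE_URL, headers=headers, params=params)
response.raise_for_status()
body = response.json()
print("matching alerts:", body["total"])
for alert in body["message"]:  # "message" holds the array of alert definitions
    print(alert.get("name"), alert.get("severity"))
```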
Parameter | Description | Type | Note |
---|---|---|---|
id | Query by alert’s id | string |
Note: Pass the alert id for the GET request through its URL in the following manner:
<Base API Endpoint>/alert-status/<id>
Example:
https://api.coralogix.com/api/v1/external/alerts/alert-status/0b3d49a2-5291-417e-a30c-83633ce0b35e
The response for this URL will include the current status of your alert.
Parameter | Description | Type |
---|---|---|
groups | Array of permutation data | ARRAY |
group | The fields-values map | OBJECT <String, String> |
last_triggered_timestamp | The last time the alert was triggered | ISODATE |
status | The current status can be resolved/triggered | STRING |
{ "groups": [ { "group": { "serviceName": "frontend" }, "last_triggered_timestamp": "2023-02-27T08:29:00.000Z", "status": "triggered" }, { "group": { "serviceName": "productcatalogservice" }, "last_triggered_timestamp": "2023-02-27T08:29:00.000Z", "status": "triggered" }, { "group": { "serviceName": "shippingservice" }, "last_triggered_timestamp": "2023-02-27T08:29:00.000Z", "status": "triggered" }, { "group": { "serviceName": "adservice" }, "last_triggered_timestamp": "2023-02-27T08:29:00.000Z", "status": "triggered" }, { "group": { "serviceName": "paymentservice" }, "last_triggered_timestamp": "2023-02-27T08:29:00.000Z", "status": "triggered" }, { "group": { "serviceName": "cartservice" }, "last_triggered_timestamp": "2023-02-27T08:29:00.000Z", "status": "triggered" }, { "group": { "serviceName": "checkoutservice" }, "last_triggered_timestamp": "2023-02-27T08:29:00.000Z", "status": "triggered" }, { "group": { "serviceName": "currencyservice" }, "last_triggered_timestamp": "2023-02-27T08:29:00.000Z", "status": "triggered" }, { "group": { "serviceName": "loadgenerator" }, "last_triggered_timestamp": "2023-02-27T08:29:00.000Z", "status": "triggered" }, { "group": { "serviceName": "featureflagservice" }, "last_triggered_timestamp": "2023-02-27T08:29:00.000Z", "status": "triggered" }, { "group": { "serviceName": "recommendationservice" }, "last_triggered_timestamp": "2023-02-27T08:29:00.000Z", "status": "triggered" }, { "group": { "serviceName": "quoteservice" }, "last_triggered_timestamp": "2023-02-27T08:29:00.000Z", "status": "triggered" }, { "group": { "serviceName": "emailservice" }, "last_triggered_timestamp": "2023-02-27T08:29:00.000Z", "status": "triggered" } ] }
If you opened a new Coralogix team and want to import all of your alerts from another team, first GET all the alerts from the first Coralogix team; the response is a JSON that includes all of your alerts. Copy it. Then create a POST request (don't forget to change the API key to one generated in the new Coralogix team) and paste the response from the previous step as the POST request body. You will also need to change the request URL by adding /bulk, e.g. https://api.coralogix.com/api/v1/external/alerts/bulk.
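Here is a hedged sketch of that export/import flow in Python with `requests`; the two API keys are placeholders for keys generated in the source and destination teams, and the exported JSON is posted unchanged to the /bulk endpoint.

```python
# Illustrative sketch only: exporting alerts from one team and importing them
# into another via the /bulk endpoint. Both API keys are placeholders.
import requests

SOURCE_URL = "https://api.coralogix.com/api/v1/external/alerts"          # source team's endpoint
DEST_BULK_URL = "https://api.coralogix.com/api/v1/external/alerts/bulk"  # destination team's endpoint
source_headers = {"Authorization": "Bearer SOURCE_TEAM_API_KEY"}
dest_headers = {
    "Authorization": "Bearer DESTINATION_TEAM_API_KEY",
    "Content-Type": "application/json",
}

# Step 1: GET all alerts from the source team.
export = requests.get(SOURCE_URL, headers=source_headers)
export.raise_for_status()

# Step 2: POST the exported response as-is to the destination team's /bulk endpoint,
# as described above.
import_resp = requests.post(DEST_BULK_URL, headers=dest_headers, json=export.json())
print(import_resp.status_code, import_resp.text)
```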
How to run a complex query inside the log_filter.text field
To run a complex query, use / before and after your text.
Example: to define an alert on logs from your production environment with status codes 5xx not originating from west-europe or west-us, use this expression:
/environment:production AND status.numeric:[500 TO 599] NOT region:/west-(europe|us)-[0-9]+//
Need help?
Our world-class customer success team is available 24/7 to walk you through your setup and answer any questions that may come up.
Feel free to reach out to us via our in-app chat or by sending us an email at [email protected].