Log Parsing Rules

Rules help you process, parse, and restructure log data to prepare it for monitoring and analysis. With rules you can extract important information, structure unstructured logs, discard unnecessary parts of logs, mask fields for compliance reasons, fix misformatted logs, block log data from being ingested based on log content, and much more.

Inside Coralogix, rules are organized inside Rule Groups. Each group has a name and a set of rules with a logical AND/OR relationship between them. Logs are processed according to the order of the Rule Groups (top to bottom) and then by the order of the rules within each Rule Group, according to the logical operators between them (AND/OR).

See here for the Rules API tutorial.

Rule Groups

To create a rule group in the Coralogix UI, go to Settings->Rules and click the ‘NEW RULE GROUP’ button, or choose one of the quick rule creation options.

The rule group definition form has a few sections:

Description

Each group has a name and can have an optional description.

Rule matcher

The Rule Matcher section defines a query. Only logs that match the Rule Matcher query will be processed by the group. This is important in making sure that only intended logs go through the group rules, as well as for performance reasons. 

The Rule Matcher query is defined by selecting a set of applications, subsystems, and severities. Only logs that fit all components of the query will be processed by the group. All entries in this section are optional; not selecting a field or defining a RegEx means that the Rule Group will run on all of your logs.
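To build intuition, here is a minimal sketch of this matching step in Python. The log field names are hypothetical, for illustration only; they are not Coralogix's internal schema.

# Minimal sketch of a Rule Matcher: a log must fit every selected component
# of the query; a component left unset matches all logs.
def rule_matcher(log: dict, applications=None, subsystems=None, severities=None) -> bool:
    if applications and log.get("application") not in applications:
        return False
    if subsystems and log.get("subsystem") not in subsystems:
        return False
    if severities and log.get("severity") not in severities:
        return False
    return True

log = {"application": "prod", "subsystem": "api", "severity": "ERROR"}
print(rule_matcher(log, applications={"prod"}, severities={"ERROR", "CRITICAL"}))  # True
print(rule_matcher(log, subsystems={"billing"}))                                   # False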


Rules

Here are the different rule types that can be created in Coralogix.

Parse

A Parse Rule uses RegEx named capture groups. The groups become the parsed log fields, and the value captured by each group becomes the field’s value. The RegEx doesn’t have to match the entire log; only the named capture groups (the names within ‘< >’) and the values they capture will be part of the reconstructed log.

The following example takes a Heroku L/H-type error log that is sent from Heroku as an unstructured log and converts it to a JSON log.

The original log:

sock=client at=warning code=H27 desc="Client Request Interrupted" method=POST path="/submit/" host=myapp.herokuapp.com fwd=17.17.17.17 dyno=web.1 connect=1ms service=0ms status=499 bytes=0

The RegEx:

^(sock=)?(?P<sock>(\S*))\s*at=(?P<severity>\S*)\s*code=(?P<error_code>\S*)\s*desc="(?P<desc>[^"]*)"\s*method=(?P<method>\S*)\s*path="(?P<path>[^"]*)" host=(?P<host>\S*)\s* (request_id=)?(?P<request_id>\S*)\s*fwd="?(?P<fwd>[^"\s]*)"?\s*dyno=(?P<dyno>\S*)\s*connect=(?P<connect>\d*)(ms)?\s*service=(?P<service>\d*)(ms)?\s*status=(?P<status>\d*)\s* bytes=(?P<bytes>\S*)\s*(protocol=)?(?P<protocol>[^"\s]*)$

The resulting log:

{
  "sock": "client",
  "severity": "warning",
  "error_code": "H27",
  "desc": "Client Request Interrupted",
  "method": "POST",
  "path": "/submit/",
  "host": "myapp.herokuapp.com",
  "request_id": "",
  "fwd": "17.17.17.17",
  "dyno": "web.1",
  "connect": "1",
  "service": "0",
  "status": "499",
  "bytes": "0",
  "protocol": ""
}
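To see the same mechanics outside of Coralogix, here is a minimal sketch in Python, whose re module uses the same (?P<name>...) named-group syntax. Only the named capture groups end up in the reconstructed log:

import json
import re

# The rule's RegEx, split across lines for readability.
pattern = re.compile(
    r'^(sock=)?(?P<sock>(\S*))\s*at=(?P<severity>\S*)\s*code=(?P<error_code>\S*)'
    r'\s*desc="(?P<desc>[^"]*)"\s*method=(?P<method>\S*)\s*path="(?P<path>[^"]*)"'
    r' host=(?P<host>\S*)\s* (request_id=)?(?P<request_id>\S*)\s*fwd="?(?P<fwd>[^"\s]*)"?'
    r'\s*dyno=(?P<dyno>\S*)\s*connect=(?P<connect>\d*)(ms)?\s*service=(?P<service>\d*)(ms)?'
    r'\s*status=(?P<status>\d*)\s* bytes=(?P<bytes>\S*)\s*(protocol=)?(?P<protocol>[^"\s]*)$'
)

log = ('sock=client at=warning code=H27 desc="Client Request Interrupted" '
       'method=POST path="/submit/" host=myapp.herokuapp.com fwd=17.17.17.17 '
       'dyno=web.1 connect=1ms service=0ms status=499 bytes=0')

match = pattern.match(log)
if match:
    # groupdict() holds exactly the named capture groups: the reconstructed log.
    print(json.dumps(match.groupdict(), indent=2))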


Extract

Unlike a Parse Rule, an Extract Rule leaves the original log intact and just adds fields to it at the root. The following example extracts information from the field message and creates two new fields, ‘bytes’ and ‘status’, that can be queried and visualized in Coralogix.

The original log:

{"level":"INFO", "message": "200 bytes sent status is OK"}

The RegEx:

message"\s*:\s*"(?P<bytes>\d+)\s*.*?status\sis\s(?P<status>[^"]+)

The resulting log:

{
  "level": "INFO",
  "message": "200 bytes sent status is OK",
  "bytes": "200",
  "status": "OK"
}

If the original log is unstructured, the rule will still add fields based on the named capture groups, and the original log will be stored inside a field called text. To illustrate with the previous example, suppose the original log is unstructured:

"level":"INFO", "message": "200 bytes sent status is OK"

The resulting log:

{
  "bytes": "200",
  "text": "\"level\":\"INFO\", \"message\": \"200 bytes sent status is OK\"",
  "status": "OK"
}
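Here is a minimal Python sketch of both Extract cases, using the same RegEx as above; the dictionaries stand in for the processed log:

import json
import re

pattern = re.compile(r'message"\s*:\s*"(?P<bytes>\d+)\s*.*?status\sis\s(?P<status>[^"]+)')

raw = '{"level":"INFO", "message": "200 bytes sent status is OK"}'
match = pattern.search(raw)

structured = json.loads(raw)            # structured log: keep the existing fields
structured.update(match.groupdict())    # add "bytes" and "status" at the root
print(json.dumps(structured))

unstructured = {"text": raw}            # unstructured log: original text moves to "text"
unstructured.update(match.groupdict())
print(json.dumps(unstructured))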


JSON Extract

A JSON Extract rule takes the value of a key and uses it to overwrite one of the Coralogix metadata fields. In this example, we extract the value from a field called ‘worker’ and use it to populate the Coralogix metadata field called Category.

The source field is always text, and the JSON Key field should contain the name of the field that you want to extract the value from. The destination field is the metadata field that will be overwritten with this value. This rule is frequently used to extract and set severity, and to set metadata fields like ‘Category’ that influence the classification algorithms.

The original log:

{
  "transaction_ID": 12543,
  "worker": "A23",
  "Message": "success"
}

The log will not change in the Logs interface, but the Coralogix metadata field ‘Category’ will be populated with "A23" in the above example.
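A minimal sketch of this behavior in Python, where the metadata dictionary is hypothetical shorthand for the Coralogix metadata fields:

import json

# The value of the "worker" key is copied into a metadata field;
# the log body itself is unchanged.
log = json.loads('{"transaction_ID": 12543, "worker": "A23", "Message": "success"}')
metadata = {}

if "worker" in log:
    metadata["Category"] = log["worker"]

print(metadata)  # {'Category': 'A23'}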

Replace

A common use case for a Replace rule is to repair misformatted JSON logs. In the following example, the JSON logs are sent with a date prefix, which breaks the JSON format and turns them into unstructured logs. The following RegEx identifies the substring to replace in the log.

Original log:

2020-08-07 {"status":"OK", "user":"John Smith", "ops":"J1"}

RegEx:

.*{

The resulting log:

{"status":"OK", "user":"John Smith", "ops":"J1"}
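Here is a minimal Python sketch of this Replace rule. The tutorial shows only the RegEx; judging by the result, the replacement string is assumed to be '{':

import re

# The greedy .*{ matches everything up to and including the opening brace;
# replacing the match with "{" drops the date prefix and restores valid JSON.
broken = '2020-08-07 {"status":"OK", "user":"John Smith", "ops":"J1"}'
fixed = re.sub(r'.*{', '{', broken, count=1)
print(fixed)  # {"status":"OK", "user":"John Smith", "ops":"J1"}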

Nested Fields

The following example shows how to use a Replace rule to rebuild a log with nested fields. Nested fields cannot be created with Extract or Parse rules, which makes Replace the right tool here.

The original log is:

{"ops":"G1","user":"John Smith-2125 Sierra Ventura Dr.-Sunnyvale-CA-94054","status":"305"}

The RegEx:

(.*user"):"([^-]*)-([^-]*)-([^-]*)-([^-]*)-([^-]*)",([^$]*)

Each pair of parentheses represents a capture group that can be referenced in the replacement string as $n (here n = 1..7).

The replacement string:

$1:{\"name\":\"$2\",\"address\":\"$3\",\"city\":\"$4\",\"state\":\"$5\",\"zip\":\"$6\"},$7

This is the resulting log:

{
   "ops":"G1",
   "user":{
      "name":"John Smith",
      "address":"2125 Sierra Ventura Dr.",
      "city":"Sunnyvale",
      "state":"CA",
      "zip":"94054"
   },
   "status":"305"
}
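Here is a minimal Python sketch of the same transformation. Note that Python's re.sub writes the rule's $n backreferences as \g<n>:

import re

log = ('{"ops":"G1","user":"John Smith-2125 Sierra Ventura Dr.-Sunnyvale-CA-94054",'
       '"status":"305"}')
pattern = r'(.*user"):"([^-]*)-([^-]*)-([^-]*)-([^-]*)-([^-]*)",([^$]*)'
# Same replacement string as above, with \g<n> in place of $n.
replacement = (r'\g<1>:{"name":"\g<2>","address":"\g<3>","city":"\g<4>",'
               r'"state":"\g<5>","zip":"\g<6>"},\g<7>')
print(re.sub(pattern, replacement, log))
# {"ops":"G1","user":{"name":"John Smith","address":"2125 Sierra Ventura Dr.",
#  "city":"Sunnyvale","state":"CA","zip":"94054"},"status":"305"}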

Block

Block rules allow you to filter out incoming logs. As with other rules, the heart of the rule is a RegEx identifying the logs to be blocked (or allowed). In this example, all the logs that contain the substring ‘sql_error_code=28000’ will be blocked.

RegEx:

sql_error_code=28000

Block rules have two additional options:

  • Block all matching logs: blocks any log that matches the Rule Matcher and the Block Rule
  • Block all non-matching logs: blocks any log that matches the Rule Matcher but not the Block Rule

Checking the “View blocked logs in LiveTail” option will block the logs but archive them to S3 (if archiving is enabled under TCO > Archive), and the logs will be visible in LiveTail. This is a more refined option that gives logs a low priority, as described here. Only 15% of low-priority log volume is counted against your quota.

The block logic indicates whether the rule blocks all logs that match the RegEx or the inverse: all logs that do not match it. In our example above, checking the “Block all non-matching logs” option would have blocked all logs except those that match sql_error_code=28000.
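Here is a minimal Python sketch of the two block options, using the RegEx from the example:

import re

# "block matching" drops logs that match the RegEx;
# "block non-matching" drops every other log instead.
pattern = re.compile(r'sql_error_code=28000')

def keep(log_line, block_non_matching=False):
    matched = bool(pattern.search(log_line))
    return matched if block_non_matching else not matched

logs = ['sql_error_code=28000 auth failed', 'sql_error_code=0 ok']
print([l for l in logs if keep(l)])                           # ['sql_error_code=0 ok']
print([l for l in logs if keep(l, block_non_matching=True)])  # ['sql_error_code=28000 auth failed']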

Sample log

When creating a rule, you can use the ‘Sample log’ area of the screen to verify your rule. Simply create or paste a log into this area and it will show you the results of the rule processing the log. For example, a Parse rule processes the log in the “Sample log” area and the result is displayed in the “Results” area.


Rule Group Logic

To add rules to a group after creating the first rule, you have to take two actions:

  • Select the rule type from the ‘ADD RULE’ dropdown list
  • Select the logical relationship between the last rule and the new one (AND/OR)

Example: Rule-1 AND Rule-2 means that a log will always be processed by both rules. Rule-1 OR Rule-2 means that the log will be processed by either Rule-1 or Rule-2, whichever matches first, or by neither if none match. In other words, if Rule-1 matches the log, then Rule-2 will not be applied to it at all.
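Here is a minimal Python sketch of these two behaviors; rule1 and rule2 are hypothetical functions that return a transformed log when they match, or None when they do not:

def apply_and(rule1, rule2, log):
    # AND: the log is always processed by both rules, in order;
    # the output of rule1 (if it matched) becomes the input of rule2.
    log = rule1(log) or log
    return rule2(log) or log

def apply_or(rule1, rule2, log):
    # OR: whichever rule matches first processes the log;
    # if rule1 matches, rule2 is never applied.
    result = rule1(log)
    return result if result is not None else (rule2(log) or log)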

Rule Groups: Order of Execution

Logs are processed by the different Rule Groups according to the order of the groups. When “entering” a group, the log is first matched against the Rule Matcher query. If it matches, it continues into the group’s rules. Within the group, the log is matched against the rules according to their order and logic, and continues down the list until it is matched by a rule that is followed by an OR logical operator. The OR and AND operators in the group do not follow the mathematical order of operations. Rules are applied sequentially, so the output of one rule becomes the input of the next.

Note: The order of rules and groups is important and can affect a log’s processing outcome.

As an example look at these two rules that parse Heroku Postgres logs:

Postgres follower, https://regex101.com/r/IyjCIj/4

Postgres leader, https://regex101.com/r/aQJsp5/2

The follower log has an extra entry at the end, follower_lag_commits. The Leader rule will capture both log types because it is less restrictive and all other fields match, while the Follower rule will match only follower logs (the first test string is not captured in the follower example because it lacks the extra entry). This means that the Follower rule should be placed first.

Another consideration is performance. The best practice is to put Block rules first and to use the Rule Matcher when possible. This prevents unnecessary processing of logs and speeds up your data processing.

You can change the order of execution of rules within a group by dragging the rule into a new position relative to other rules.

The same goes for Rule Groups. You can change their order by dragging them up and down the list.

Searching for Rule Groups

To quickly find a Rule Group of interest, use the free-text search field; you can search by rule or group name.

Editing rules and groups

In order to edit a Rule Group or a Rule within a group, click on the group, make your changes, and click on ‘SAVE CHANGES’.
