Log parsing rules

Log parsing rules let you parse, extract, map, convert, and filter your log entries. Rules can help you convert unstructured log data into JSON format, extract important information from your logs, and filter your log stream according to various conditions.

One of the most popular ways to customize your logs is using Regex named groups, which allow you to extract and modify your logs in various ways.
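As a quick illustration of the concept (a hypothetical standalone sketch, not a Coralogix rule), a named group `(?P<name>...)` captures a sub-match under a readable name instead of a positional index:

```python
import re

# Hypothetical example of Regex named groups: each (?P<name>...) group
# captures part of the match under a readable name.
pattern = re.compile(r"(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})")

match = pattern.match("2021-07-15")
print(match.group("year"))  # 2021
print(match.groupdict())    # {'year': '2021', 'month': '07', 'day': '15'}
```

The same `(?P<name>...)` syntax is what the rule types below rely on to route captured values into named fields.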

Access your log parsing rules interface by opening your admin user dashboard -> Settings -> Rules.

[Image: open Coralogix settings]

[Image: open Coralogix parsing rules]

Then:

1) Add a new group and name it.

[Image: add log parsing rules group in Coralogix]

2) Add a new rule

[Image: add rule in Coralogix]

3) Select the rule type that you want to create

[Image: select rule type in Coralogix]

4) Create a Regex according to your needs.

5) Approve the rule you created (note that you can see how it affects your data in real time in the “Preview” pane).

Important rule groups logic:

  • Rules run in order within their group; once the first rule in a group matches, the engine moves on to the next group.
  • Block/Allow rules must be configured one per group in order to run properly.
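The ordering logic above can be sketched as follows (a simplified illustration, not Coralogix's actual engine; the patterns are assumed examples):

```python
import re

# Within a group, rules are tried top to bottom; after the first match,
# the engine moves on to the next group.
rule_groups = [
    ["ERROR", "WARN"],   # group 1: patterns tried in order
    [r"status:\s*\d+"],  # group 2
]

def first_match_per_group(log_line):
    matched = []
    for group in rule_groups:
        for pattern in group:
            if re.search(pattern, log_line):
                matched.append(pattern)
                break  # first matching rule wins; skip the rest of this group
    return matched

print(first_match_per_group("WARN status: 200"))
# ['WARN', 'status:\\s*\\d+']
```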

Examples of how you can leverage this feature to your benefit:

1) Extract information from a log message

Raw log text: INFO myclass: This is a test message

[Image: Coralogix log parsing rules - extract]

The result: the “INFO” value is extracted to the severity column, “myclass” to the category column, and the rest goes to the text column.
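In named-group terms, this extraction behaves roughly like the following sketch (the group names mirror the severity, category, and text columns above):

```python
import re

# Named groups targeting the severity, category, and text columns.
pattern = re.compile(r"(?P<severity>\w+)\s+(?P<category>\w+):\s*(?P<text>.*)")

raw = "INFO myclass: This is a test message"
fields = pattern.match(raw).groupdict()
print(fields)
# {'severity': 'INFO', 'category': 'myclass', 'text': 'This is a test message'}
```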

2) Extract information from a JSON log message

Say we have JSON-formatted log messages with a field called http_status_code that represents the log severity. We can extract that field's value into Coralogix's pre-defined severity field. This is how it is done:

[Image: Coralogix log parsing rules - JSON extract]
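Conceptually, the rule does something like this sketch (field names are taken from the example above; the standalone function is an illustration, not the rule engine itself):

```python
import json

# Promote a JSON log's http_status_code value into a top-level
# severity field.
def extract_severity(raw_log):
    entry = json.loads(raw_log)
    entry["severity"] = entry["http_status_code"]
    return entry

print(extract_severity('{"http_status_code": 404, "path": "/login"}'))
# {'http_status_code': 404, 'path': '/login', 'severity': 404}
```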

3) Convert log message to JSON

Raw log text: result: 200, status: OK, username: anonymous

[Image: Coralogix log parsing rules - parse to JSON]

The result is that instead of a raw text log, a JSON log will be presented:

{
  "result": 200,
  "status": "OK",
  "username": "anonymous"
}
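The transformation amounts to splitting the raw text into "key: value" pairs, roughly like this sketch (note that values stay strings here, whereas the rule can emit typed values such as the number 200 shown above):

```python
import json

# Parse comma-separated "key: value" text into a JSON log entry.
raw = "result: 200, status: OK, username: anonymous"

entry = {}
for pair in raw.split(","):
    key, value = pair.split(":", 1)
    entry[key.strip()] = value.strip()

print(json.dumps(entry))
# {"result": "200", "status": "OK", "username": "anonymous"}
```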

4) Replace any predefined item

A common use case for a replace rule is fixing unwanted mapping exceptions in Elasticsearch. Generally speaking, mapping exceptions occur when values of two different data types try to populate the same JSON key (the same field in Elasticsearch's mapping). Coralogix automatically handles numeric vs. string mapping exceptions, which are the most common type. To prevent the second most common type, the object vs. string mapping exception, you can use a “Replace” rule to route different data types into different JSON keys (i.e., different Elasticsearch fields).

How can I do that?

Add a replace rule and name it. Define a regex that captures the abnormal occurrence of a specific log (a log arrived with “username”: {“name”: “John”, “type”: “Admin”} as part of the text payload instead of “username”: “John”) and replace the abnormal part with a new one in the ‘Replace to’ field (in our example it will be “username_object”: { ).

This type of rule creates a new JSON key called “username_object” and routes all object values into that field while sending string values to the “username” field. Objects and strings are thus separated, and all your logs are parsed and mapped into their distinct keys and values. Now you can use our layout & filters capabilities and all the benefits of Kibana.
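The substitution itself behaves like this sketch of the regex replace from the example above:

```python
import re

# When "username" carries an object instead of a string, rename the key
# to "username_object" so the two data types land in different fields.
log_text = '"username": {"name": "John", "type": "Admin"}'

fixed = re.sub(r'"username":\s*\{', '"username_object": {', log_text)
print(fixed)
# "username_object": {"name": "John", "type": "Admin"}
```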

[Image: Coralogix log parsing rules - replace]

5) Allow filter based on a regex rule

Raw log text: result: 200, status: OK, username: anonymous

[Image: Coralogix log parsing rules - allow]
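The effect of an allow filter can be sketched as follows (the pattern is an assumed example): only log lines matching the regex are kept, and everything else is dropped.

```python
import re

# Allow filter: keep only lines that match the pattern.
allow = re.compile(r"status:\s*OK")

logs = [
    "result: 200, status: OK, username: anonymous",
    "result: 500, status: ERROR, username: admin",
]
kept = [line for line in logs if allow.search(line)]
print(kept)
# ['result: 200, status: OK, username: anonymous']
```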

6) Block filter based on a regex rule

Raw log text: result: 200, status: OK, username: anonymous

[Image: Coralogix log parsing rules - block]
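A block filter is the inverse of an allow filter, roughly like this sketch (the pattern is again an assumed example): lines matching the regex are dropped and the rest pass through.

```python
import re

# Block filter: drop lines that match the pattern, keep the rest.
block = re.compile(r"username:\s*anonymous")

logs = [
    "result: 200, status: OK, username: anonymous",
    "result: 200, status: OK, username: admin",
]
kept = [line for line in logs if not block.search(line)]
print(kept)
# ['result: 200, status: OK, username: admin']
```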
