Log parsing rules

Log parsing rules let you parse, extract, map, convert, and filter your log entries. Rules can help you convert unstructured log data into JSON format, extract important information from your logs, and filter your log stream according to various conditions.

One of the most popular ways to customize your logs is using regex named capture groups, which allow you to extract and modify your logs in various ways.
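To illustrate the idea, here is a minimal sketch of named capture groups using Python's regex syntax `(?P<name>...)`; the log format and field names are hypothetical:

```python
import re

# Hypothetical log line and field names, for illustration only.
line = "2021-06-01 12:00:05 ERROR payment failed"
pattern = re.compile(r"^(?P<date>\S+) (?P<time>\S+) (?P<level>\w+) (?P<message>.*)$")

fields = pattern.match(line).groupdict()
# Each named group becomes a key: fields["level"] == "ERROR"
```

Each named group in the pattern becomes an addressable field, which is the mechanism the rules below rely on.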

Access your log parsing rules interface by opening your admin user dashboard -> Settings -> Rules.

open coralogix settings

open coralogix parsing rules


1) Add a new group and name it.

Add log parsing rules group coralogix

2) Add a new rule

add rule coralogix

3) Select the rule type that you want to create

select rule type in coralogix

4) Create a Regex according to your needs.

5) Approve the rule you created (note that you can see how it affects your data in real time under the “preview” pane)

Important rule group logic:

  • Rules run in order within their group; once the first rule in a group matches, the engine moves on to the next group.
  • Block/Allow rules must be configured one per group in order to run properly.
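The group logic above can be sketched as follows. This is a simplified illustration, not Coralogix's actual engine; here a rule is modeled as a function that returns the transformed log on a match and None otherwise:

```python
# Simplified sketch of rule-group evaluation: rules run in order within a
# group, and after the first rule matches, the engine moves to the next group.
def run_groups(log, groups):
    for group in groups:              # groups are evaluated in order
        for rule in group:            # rules are evaluated in order within a group
            result = rule(log)
            if result is not None:    # first matching rule wins...
                log = result
                break                 # ...then move on to the next group
    return log

# Hypothetical rules for demonstration.
uppercase_errors = lambda log: log.upper() if "error" in log else None
tag = lambda log: "[parsed] " + log

print(run_groups("error: disk full", [[uppercase_errors], [tag]]))
# [parsed] ERROR: DISK FULL
```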

Examples of how you can leverage this feature to your benefit:

1) Extract information from a log message

Raw log text: INFO myclass: This is a test message

coralogix log parsing rules extract

The result: the “INFO” value is extracted to the severity column, “myclass” to the category column, and the rest goes to the text column.

Note: Data can be extracted into pre-defined Coralogix keys (severity, category, className, methodName, threadId), overriding the existing content with the extracted string, or into a new custom key of your own choosing. Custom keys are added to your main JSON payload under the Text column.
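The extraction above can be sketched with a named-group regex in Python (the pattern is an illustrative guess at how such a rule could be written, not the exact rule shown in the screenshot):

```python
import re

# Raw log text from the example above.
raw = "INFO myclass: This is a test message"

# Named groups map to the target keys: severity, category, and text.
pattern = re.compile(r"^(?P<severity>\w+)\s+(?P<category>\w+):\s*(?P<text>.*)$")

fields = pattern.match(raw).groupdict()
# {"severity": "INFO", "category": "myclass", "text": "This is a test message"}
```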

2) Extract information from a JSON log message

Say we have JSON-formatted log messages with a field called http_status_code that represents the log severity. We can extract that field's value into Coralogix's pre-defined severity field. This is how it is done:

coralogix log parsing rules json extract
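Conceptually, the rule reads a field out of the JSON payload and uses it to populate the severity key. A rough sketch (the status-to-severity mapping here is an assumption for illustration, not Coralogix's rule definition):

```python
import json

# Hypothetical JSON log message with an http_status_code field.
log = json.loads('{"http_status_code": 500, "path": "/api/users"}')

# Illustrative mapping from status code to a severity value.
severity = "ERROR" if log["http_status_code"] >= 500 else "INFO"
# severity == "ERROR"
```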

3) Convert log message to JSON

Raw log text: result: 200, status: OK, username: anonymous

coralogix log parsing rules parse to json

The result is that instead of a raw text log, a JSON log will be presented:


{
  "result": 200,
  "status": "OK",
  "username": "anonymous"
}
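The conversion above can be sketched with a regex over the key/value pairs. Note that in this simplified sketch all values come out as strings, whereas the rule engine's result shows result as a number:

```python
import re
import json

# Raw log text from the example above.
raw = "result: 200, status: OK, username: anonymous"

# Capture each "key: value" pair; values are comma-delimited.
pairs = dict(re.findall(r"(\w+):\s*([^,]+)", raw))

print(json.dumps(pairs))
# {"result": "200", "status": "OK", "username": "anonymous"}
```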


4) Replace any predefined item

A common use case for a replace rule is fixing unwanted mapping exceptions in Elasticsearch. Generally speaking, mapping exceptions occur when values of two different data types try to populate the same JSON key (the same field in Elasticsearch's mapping). Coralogix automatically handles Numeric vs. String mapping exceptions, which are the most common type. To prevent the second most common type, Object vs. String mapping exceptions, you can use a “Replace” rule to route different data types into different JSON keys (i.e., Elasticsearch fields).

How can I do that?

Add a replace rule and name it. Define a regex to capture the abnormal occurrence of a specific log (a log arrived with “username”: {“name”: “John”, “type”: “Admin”} as part of the text payload instead of “username”: “John”) and rename the abnormal part in the ‘Replace to’ field (in our example, “username_object”: { ).

This type of rule creates a new JSON key called “username_object” and routes all object values into that field while sending string values to the “username” field. Objects and strings are thus separated, and all your logs are parsed and mapped into their respective keys and values. Now you can use our layout & filters capabilities and all the benefits of Kibana.
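A sketch of the replacement itself, using Python's `re.sub` in place of the rule engine (the pattern is an illustrative approximation of the ‘Replace to’ configuration):

```python
import re

# A log whose "username" key arrived as an object instead of a string.
log = 'payload: "username": {"name": "John", "type": "Admin"}, "host": "web-1"'

# Rename the key only when it is followed by an object opener "{",
# routing object values into a separate "username_object" key.
fixed = re.sub(r'"username":\s*{', '"username_object": {', log)
# fixed now contains '"username_object": {"name": "John", ...'
```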

coralogix log parsing rules replace

5) Allow filter based on a regex rule

Raw log text: result: 200, status: OK, username: anonymous

coralogix log parsing rules allow
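The effect of an allow filter can be sketched as keeping only the logs that match the regex and dropping everything else (the pattern below is an illustrative choice, not the rule from the screenshot):

```python
import re

# Allow rule: keep only logs whose status is OK.
allow = re.compile(r"status:\s*OK")

logs = [
    "result: 200, status: OK, username: anonymous",
    "result: 500, status: ERROR, username: anonymous",
]

kept = [line for line in logs if allow.search(line)]
# kept == ["result: 200, status: OK, username: anonymous"]
```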

6) Block filter based on a regex rule

Raw log text: result: 200, status: OK, username: anonymous
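A block filter is the inverse of an allow filter: logs matching the regex are dropped and the rest pass through. A minimal sketch with a hypothetical pattern:

```python
import re

# Block rule: drop logs from the anonymous user.
block = re.compile(r"username:\s*anonymous")

logs = [
    "result: 200, status: OK, username: anonymous",
    "result: 200, status: OK, username: admin",
]

kept = [line for line in logs if not block.search(line)]
# kept == ["result: 200, status: OK, username: admin"]
```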


Important notes on Block rules:

  • Each block rule should be set in its own group: one block rule per group.
  • If you want to block a log based on two or more parameters within it (e.g., blocking logs from a specific environment and a specific subsystem), consider all permutations of your logs in case they are JSON-structured.
  • The best practice is to place block rules at the top of your rules library. Since rules run according to their group's order, the block rule regex can then be applied to the raw log templates before any other parsing is performed.
  • Blocked logs count for only 5% of their quota; this cost covers network and data processing. On the rare occasion that you block more than 4 times your plan's volume, the blocked logs will be counted as 10%.

You can activate or de-activate your rules through the UI or you can do it via script using our Rules-API. For more information, review the Rules-API tutorial.

To expand your knowledge about rules, we recommend reading the following posts:
Rules cheat sheet – Examples of the use of rules and troubleshooting tips.
Instantly Parse The Top 12 Log Types – Blog post with examples of how you can parse easily the most common log types.


Soft Block Logs


The checkbox inside Block rules lets you “soft block” your logs, which saves 70% of the blocked logs' quota cost while still allowing you to query the blocked logs in real time and/or archive them to your S3 bucket.
