

Logs2Metrics

Last Updated: Mar. 01, 2023

Nowadays, as efficiency is the name of the game, companies that store large amounts of data are looking for ways to optimize their data stores, raising the question of which data is more important to have available and which is less so.

Coralogix Logs2Metrics enables you to generate metrics from your log data to optimize storage without sacrificing important data. You simply define a query, and Coralogix will execute it every minute and store different data aggregations in a long-term index. Metrics begin accumulating from the point in time at which they were defined. The available query time range for your Logs2Metrics indices is 90 days.

Activating Logs2Metrics allows you to create up to 30 metrics with a 12 months retention period.


  1. In Coralogix, go to the Data Flow tab –> Logs2Metrics section and click the New Metric button.
  2. Define your metric:
    Metric Name – The name you choose will be the name of the field representing this metric in the long-term index and will be used in Kibana visualizations.
    Metric Description – Describe your metric.
    Search Query – Use a free-text query or a Lucene-style query (similar to Logs/Alerts queries). For example, coralogix.metadata.subsystemName:nginx* will store, every minute, the count of logs from any subsystem whose name starts with nginx, enabling long-term analysis of your Nginx subsystem. You can also set filters on applications, subsystems, and severities. This is just an example; the query section can contain anything you need. For instance, status.numeric:[500 TO 599] will store data about 5xx status codes.
  3. Metric Fields (optional) –
    Define the fields for which metric data will be collected; you can define up to 10 fields. These fields will be available in Kibana to define and chart the metrics.

    In the example above, we defined a metric field named response_time for our Logs2Metric. Coralogix will collect aggregated metric values for the response_time field and keep them under corresponding field names within the log2metrics index. Another field that will be created is metrics.response_time.value. It holds an array of the actual response_time values from the logs aggregated by the defined Logs2Metric.
  4. Labels (optional) –
    Define the Logs2Metric labels; these are the Kibana buckets that can be used with the metric data. By default, you can create up to 6 labels; contact support if you need more.

    You may use any mapped field as a label, depending on what you are trying to do. You can also give your labels any names you like, as long as they make sense to you.
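Since the Search Query field from step 2 accepts any Lucene expression, the filters shown above can be combined. A hypothetical example reusing the field names from this document:

```lucene
coralogix.metadata.subsystemName:nginx* AND status.numeric:[500 TO 599]
```

Executed every minute, a query like this would store the count of 5xx responses produced by any nginx subsystem.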


When creating a Logs2Metric that expects a list of labels, logs that do not include some of the labels are converted into metric documents that have only a subset of the expected labels. While this behavior is not an issue in Kibana, when using PromQL with Grafana variables there is no option to retrieve metrics with a subset of the labels, so only a partial subset of the results is returned.


This behavior has been rectified by populating the missing labels so that the correct results are shown.

The “Default Metric”

With every L2M created in Coralogix, regardless of the specific “Metric Fields” and “Labels” you may define, a default metric is created. This metric is enabled upon L2M creation and is available in Grafana and when utilizing PromQL. The default metric name is structured as “<The Metric Name>_cx_docs_total”. For example, if the L2M name is “Status_500”, the default metric name will automatically be set to “Status_500_cx_docs_total”.
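The naming convention can be sketched as a small helper; this is only an illustration of the rule above, not a Coralogix API:

```python
def default_metric_name(l2m_name: str) -> str:
    """Derive the default metric name Coralogix assigns to an L2M."""
    return f"{l2m_name}_cx_docs_total"

print(default_metric_name("Status_500"))  # Status_500_cx_docs_total
```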


A “Count” metric is enabled for tracking the number of logs that match the L2M filters.

For example, in Grafana you can explore the number of logs that meet the L2M conditions, such as:

– Query

– Applications

– Subsystems

– Severities

You may also utilize PromQL (for example, while creating a “Metric Alert”) to query the default metric,

e.g.: “sum(Status_500_cx_docs_total)”.
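As an illustration, assuming the L2M “Status_500” was defined with a label named region (a hypothetical label name), the default metric can be queried in Grafana like so:

```promql
# Total count of matching logs across all label permutations
sum(Status_500_cx_docs_total)

# Break the count down by one of the L2M labels
sum by (region) (Status_500_cx_docs_total)
```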

Metric Permutations Limit

A Logs2Metric’s label permutations (the unique combinations of the label values) are a finite number, defined by the user per metric (at the bottom of the metric definition). By default, the limit is set to 30,000 permutations, and you may choose a different maximum permutations value per metric, while the maximum number of permutations per account is 1,000,000.
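Since the permutation count is the product of each label’s number of distinct values, a quick back-of-the-envelope check (with hypothetical cardinalities) can tell you whether a metric fits its allocation:

```python
from math import prod

# Hypothetical distinct-value counts for each label on a Logs2Metric
label_cardinalities = {
    "status_code": 8,
    "region": 5,
    "pod_name": 400,
}

# Total permutations = product of each label's cardinality
permutations = prod(label_cardinalities.values())
print(permutations)  # 8 * 5 * 400 = 16000

# Compare against the default per-metric limit of 30,000
fits = permutations <= 30_000
print(fits)  # True
```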

At the top of your defined metrics list, you can see how many available permutations you have left at your disposal.

Normally, a metric document shows the count of logs per unique permutation of the chosen labels. If a metric shows an exclamation mark next to the number of allocated permutations, it means you have reached the permutations limit. You should adjust the permutation allocation to accommodate all possible permutations; otherwise, the metric document will contain an aggregated log count under a CoralogixOtherValues bucket.

Kibana Usage

Once you create a Logs2Metric, you will want to use it to make visualizations, graphs, tables, or anything else you like.

  1. In Kibana –> Management –> Index Patterns you can see the newly created long-term index for the aggregated metric data.
  2. Click on it to see the fields in your new Metrics index (you should expect to see fields with your metrics and labels names). If you are not seeing all the expected fields, refresh your index pattern to show the most updated fields list.


Creating visualizations and dashboards for Logs2Metrics follows the same procedure as creating any Kibana visualization or dashboard. Make sure to select the log2metrics index, which is the index that always ends with _log_metrics.

Under Kibana, click on Visualization.

Select the *:33_log_metrics* index.

The visualization below shows the different status codes over time.

The log2metrics index includes logs that hold aggregation about your application and infrastructure logs. This affects the type of Kibana metrics you will use when creating log2metrics visualizations.

  1. To view only the data created from a specific Logs2Metric, add the name of the metric to the query: “<metric_name>” (“status code” in our example).
  2. Should you want to count the number of events for a specific label, use the SUM metric on the field docsCount.
  3. For Kibana buckets Date histogram aggregation use timestamp instead of Coralogix.timestamp.
  4. For the bucket, we use labels.response_code.value, which gets its data from the response_code field.
  5. Kibana Min and Max will work as expected, since the Min and Max of the aggregated logs are the same as those of the original logs.
  • Notice that we used labels.response_code.value as one of the buckets. You cannot use this bucket if it does not belong to the same Logs2Metric that we used to filter the data.
  • If we did not define a label (bucket) under the Logs2Metric for the metric “status code”, we will not be able to use this bucket.
  • Make sure the buckets you are using exist under the Logs2Metric you are trying to filter by.
  • Should you want to view the average of a specific metric, use the Average aggregation on the field metrics.<metric_name>.value.

As mentioned above, the query section can contain any query you want. You can create any metric using complex queries based on your log data. These are a few examples of common use cases:

  • Enable long-term analysis on your API stability: Store the amount of 5XX responses on your prod server.
    • Query – status.numeric:[500 TO *] AND env:production
  • Discover business trends: Store the number of successful purchases on your website.
    • Query – message: “user completed purchase successfully”
  • Discover trends in your application quality: Store the number of exceptions you have.
    • Query – message:Exception AND severity:ERROR
  • Identify a trend and possible attacks: Store the amount of NXDOMAIN responses by your DNS resolver.
    • Query – message:NXDOMAIN

Have any questions? Check our website and in-app chat for quick help.
