Logs2Metrics

Nowadays, with efficiency the name of the game, companies that store large amounts of data are looking for ways to optimize their data stores, raising the question: which data is most important to keep available, and which is less so?

Coralogix Logs2Metrics enables you to generate metrics from your log data to optimize storage without sacrificing important data. You simply define a query, and Coralogix will execute it every minute and store different data aggregations in a long-term index. Metrics begin accumulating from the moment they are defined. The available query time range for your Logs2Metrics indices is 30 days.

Activating Logs2Metrics uses 5% of your daily quota and allows you to create up to 30 metrics with a 90-day retention period.
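To make the per-minute behavior concrete, here is a minimal Python sketch of what "execute the query every minute and store the count" amounts to. The log data, field layout, and predicate are invented for illustration; they are not Coralogix APIs.

```python
from collections import Counter

# Hypothetical sample logs: (timestamp, subsystem) pairs. Logs2Metrics
# conceptually runs the saved query once a minute and stores the match count.
logs = [
    ("2024-01-01T10:00:05", "nginx-ingress"),
    ("2024-01-01T10:00:40", "nginx-ingress"),
    ("2024-01-01T10:01:12", "billing"),
    ("2024-01-01T10:01:50", "nginx-proxy"),
]

def counts_per_minute(logs, predicate):
    """Bucket the logs that match `predicate` into one count per minute."""
    buckets = Counter()
    for ts, subsystem in logs:
        if predicate(subsystem):
            buckets[ts[:16]] += 1  # truncate to YYYY-MM-DDTHH:MM
    return dict(buckets)

# Emulates the query coralogix.metadata.subsystemName:nginx*
result = counts_per_minute(logs, lambda s: s.startswith("nginx"))
# result: {"2024-01-01T10:00": 2, "2024-01-01T10:01": 1}
```

Each resulting minute bucket corresponds to one document stored in the long-term index.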

Guide

  1. In Coralogix, go to the TCO tab –> Logs2Metrics section and click the New Metric button.
  2. Define your metric:
    Metric Name – The name you choose will become the field name representing this metric in the long-term index and will be used in Kibana visualizations.
    Metric Description – Describe your Metric.
    Search Query – Use a free-text query or a Lucene query (similar to logs/alerts queries). For example, coralogix.metadata.subsystemName:nginx* will store, every minute, the count of logs from any subsystem whose name starts with nginx, enabling long-term analysis of your Nginx subsystem. You can also set filters on applications, subsystems, and severities. This is just an example; the query section can contain anything you need. For instance, status.numeric:[500 TO 599] will store data about 5xx status codes.

  3. Define the fields for which the metrics data will be collected; you can define up to 10 fields. These fields will be available in Kibana for defining and charting the metrics.

    In the example above, we defined a metric field named response_time. Coralogix will collect aggregated metric values for the response_time field and keep them under the following field names within the log2metrics index:
    metrics.response_time.avg
    metrics.response_time.count
    metrics.response_time.max
    metrics.response_time.min
    metrics.response_time.sum
    Another field that will be created is metrics.response_time.value. It holds an array of the actual response_time values from the logs aggregated by the defined log2metric.
  4. Define the Logs2Metric labels; these are the Kibana buckets that can be used with the metric data. By default, you can create up to 3 labels; contact support if you need more.

    You may use any mapped field as a label, depending on what you are trying to do, and you can name the labels anything that makes sense to you.
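To illustrate step 3, here is a small Python sketch of the five aggregations, plus the values array, that are stored per metric field. This is not Coralogix code; the response-time values are invented.

```python
def aggregate_field(values, field_name="response_time"):
    """Compute the aggregations Logs2Metrics stores for one metric field,
    keyed the way they appear in the log2metrics index."""
    prefix = f"metrics.{field_name}"
    return {
        f"{prefix}.avg": sum(values) / len(values),
        f"{prefix}.count": len(values),
        f"{prefix}.max": max(values),
        f"{prefix}.min": min(values),
        f"{prefix}.sum": sum(values),
        f"{prefix}.value": list(values),  # the raw aggregated values
    }

# Hypothetical response times (ms) captured during one minute
doc = aggregate_field([120, 80, 200, 100])
# doc["metrics.response_time.avg"] == 125.0
# doc["metrics.response_time.sum"] == 500
```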

Metric Permutations Limit

A metric's label permutations (the unique combinations of its label values) are limited to a finite number, defined by the user per metric (at the bottom of the metric definition). By default, the limit is set to 30,000 permutations; you may choose a different maximum per metric, while the maximum per account is 1,000,000 permutations.

At the top of your defined metrics list, you will see how many permutations remain at your disposal.

Normally, a metric document shows the count of logs per unique permutation of the chosen labels. If a metric shows an exclamation mark next to its number of allocated permutations, you have reached the permutations limit. You should adjust the permutation allocation to accommodate all possible permutations; otherwise, the metric document will contain an aggregated log count under a CoralogixOtherValues bucket.
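The overflow behavior can be sketched in Python. The label rows are invented, and the actual bucketing happens server-side in Coralogix; this only illustrates how counts collapse into CoralogixOtherValues once the limit is hit.

```python
from collections import Counter

def bucket_by_labels(label_rows, max_permutations):
    """Count logs per unique label permutation; once the permutation limit
    is reached, any new permutation falls into CoralogixOtherValues."""
    counts = Counter()
    seen = set()
    for labels in label_rows:
        key = tuple(sorted(labels.items()))
        if key in seen or len(seen) < max_permutations:
            seen.add(key)
            counts[key] += 1
        else:
            counts["CoralogixOtherValues"] += 1
    return counts

# Four logs, three unique label permutations, but a limit of two
rows = [
    {"region": "us", "status": "200"},
    {"region": "us", "status": "500"},
    {"region": "eu", "status": "200"},
    {"region": "us", "status": "200"},
]
result = bucket_by_labels(rows, max_permutations=2)
# The third unique permutation overflows into CoralogixOtherValues
```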

Kibana Usage

Once you create a Logs2Metrics metric, you will want to use it: to make visualizations, graphs, tables, or anything else you like.

  1. In Kibana –> Management –> Index Patterns you can see the newly created long-term index for the aggregated metric data.
  2. Click on it to see the fields in your new metrics index (you should expect to see fields with your metric and label names). If you are not seeing all the expected fields, refresh your index pattern to show the most updated fields list.

Creating visualizations and dashboards for Logs2Metrics follows the same procedure as creating any Kibana visualization or dashboard. Make sure to select the Logs2Metrics index, which always ends with _log_metrics.

Under Kibana, click on Visualization.

Select the *:33_log_metrics* index.

The visualization below shows the different status codes over time.

 

The log2metrics index contains documents that hold aggregations of your application and infrastructure logs. This affects which Kibana metrics you use when creating Logs2Metrics visualizations.

  1. In the filter above for this visualization, the name of the metric itself appears between quotes. If you intend to use the metric name to filter your graph, put the Logs2Metrics name between quotes. As a best practice, use name: “name of the metric”.
  2. For Logs2Metrics visualizations, the Kibana Count metric is substituted with the Kibana Sum metric, applied to the docsCount field.
  3. For the Kibana bucket's Date Histogram aggregation, use timestamp instead of Coralogix.timestamp.
  4. For the bucket, we use labels.response_code.value, which takes its values from the response_code field.
  5. Kibana Min and Max will work as expected, since the min and max of the aggregated logs are the same as those of the original logs.
  • Notice that we used labels.response_code.value as one of the buckets. You cannot use this bucket if it does not belong to the same Logs2Metrics metric that we used to filter the data.
  • If we did not define a label (bucket) for the “status code” metric under Logs2Metrics, we would not be able to use this bucket.
  • Make sure the buckets you are using exist under the Logs2Metrics metric you are filtering by.
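The reason Sum over docsCount replaces Count is that each metric document is already an aggregate of many raw logs. A short Python sketch with invented documents:

```python
# Each Logs2Metrics document summarizes one minute of raw logs, so counting
# documents under-reports; summing docsCount recovers the real log count.
metric_docs = [
    {"timestamp": "2024-01-01T10:00", "docsCount": 42},
    {"timestamp": "2024-01-01T10:01", "docsCount": 17},
    {"timestamp": "2024-01-01T10:02", "docsCount": 5},
]

kibana_count = len(metric_docs)                        # 3 documents
total_logs = sum(d["docsCount"] for d in metric_docs)  # 64 original logs
```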

As mentioned above, the query section can contain whatever query you like, and you can create metrics using complex queries based on your log data. Here are a few examples of common use cases:

  • Enable long-term analysis of your API stability: Store the number of 5XX responses on your prod server.
    • Query – status.numeric:[500 TO *] AND env:production
  • Discover business trends: Store the number of successful purchases on your website.
    • Query – message: “user completed purchase successfully”
  • Discover trends in your application quality: Store the number of exceptions you have.
    • Query – message:Exception AND severity:ERROR
  • Identify a trend and possible attacks: Store the number of NXDOMAIN responses by your DNS resolver.
    • Query – message:NXDOMAIN
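For intuition, the first example query (status.numeric:[500 TO *] AND env:production) applies the following matching logic, sketched here in Python over invented log records:

```python
def matches_5xx_prod(log):
    """Emulate the Lucene query status.numeric:[500 TO *] AND env:production."""
    return log["status"] >= 500 and log["env"] == "production"

logs = [
    {"status": 200, "env": "production"},
    {"status": 503, "env": "production"},
    {"status": 500, "env": "staging"},
]
matches = [log for log in logs if matches_5xx_prod(log)]
# matches == [{"status": 503, "env": "production"}]
```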

Have any questions? Check our website and in-app chat for quick help.