Nowadays, with efficiency the name of the game, companies that store large amounts of data are constantly looking for ways to optimize their data stores, which raises the question: which data is more important to keep readily available, and which is less?
Coralogix Logs2Metrics enables you to generate metrics from your log data to optimize storage without sacrificing important data. You simply define a query, and Coralogix will execute it every minute and store different data aggregations in a long-term index. Metrics start to gather from the point in time at which they are defined. The available query time range for your Logs2Metrics indices is 30 days.
Activating Logs2Metrics uses 5% of your daily quota and allows you to create up to 30 metrics with a 90-day retention period.
- In Coralogix, go to the TCO tab –> Logs2Metrics section and click the New Metric button.
- Define your metric:
- Metric Name – The name you choose will be the name of the field representing this metric in the long-term index, and it will be used in Kibana visualizations.
- Metric Description – Describe your Metric.
- Search Query – Use a written text query. For example, coralogix.metadata.subsystemName:nginx* will store, every minute, data for all subsystems whose names start with nginx, enabling long-term analysis of your Nginx subsystems. You can also set filters on applications, subsystems, and severities. This is just an example; the query section can contain anything you want. For instance, status:[500 TO 599] will store data about 5xx status codes.
- Define the fields for which metric data will be collected. You can define up to 10 fields. These fields will be available in Kibana for defining and charting the metrics.
In the example above, we defined a metric field named response_time for our Logs2Metrics metric. Coralogix will collect aggregated metric values for the response_time field and keep them under the following field names within the log2metrics index:
Another field that will be created is metrics.response_time.value. It holds a large sample of the real values for the different ‘metric name’ fields.
- Define up to 3 ‘Metric labels’. These are the Kibana buckets that can be used with the metric data.
This is just an example; you can use any labels you like, depending on what you are trying to do. You can also give the labels any names you like, as long as they make sense to you.
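To make the pieces concrete, here is a simplified Python sketch of what a per-minute Logs2Metrics document could look like for the example above, with response_time as the metric field and response_code as a label. The sample log records are invented, and the exact set of aggregations Coralogix stores may differ; the field-name shapes (metrics.&lt;name&gt;.value, labels.&lt;name&gt;.value, docsCount) follow this article.

```python
from collections import defaultdict

# Invented raw log lines matched by the metric's query during one minute.
logs = [
    {"response_time": 120, "response_code": 200},
    {"response_time": 340, "response_code": 200},
    {"response_time": 95,  "response_code": 500},
    {"response_time": 210, "response_code": 200},
]

def aggregate_minute(logs, metric_field, label_field):
    """Sketch of one minute's aggregation: metric-field aggregates,
    bucketed by a label field, written as flattened index fields."""
    buckets = defaultdict(list)
    for log in logs:
        buckets[log[label_field]].append(log[metric_field])
    docs = []
    for label_value, values in buckets.items():
        docs.append({
            f"labels.{label_field}.value": label_value,
            f"metrics.{metric_field}.min": min(values),
            f"metrics.{metric_field}.max": max(values),
            f"metrics.{metric_field}.avg": sum(values) / len(values),
            # a sample of the real values, as in metrics.response_time.value
            f"metrics.{metric_field}.value": values,
            "docsCount": len(values),
        })
    return docs

docs = aggregate_minute(logs, "response_time", "response_code")
```

Each aggregated document stands in for many raw logs, which is why the docsCount field matters later when charting counts in Kibana.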
Once you create a Logs2Metrics metric, you will want to use it: make visualizations, graphs, tables, or anything else you like.
- In Kibana –> Management –> Index Patterns, you can see the newly created long-term index for the aggregated metric data.
- Click on it to see the fields in your new Metrics index (you should expect to see fields with your metrics names). If you are not seeing all the expected fields, refresh your index pattern to show the most updated fields list.
Creating visualizations and dashboards for log2metrics follows the same procedure as creating any Kibana visualization or dashboard. Make sure to select the log2metrics index, which is the index that always ends with log_metrics.
Under Kibana, click on Visualization.
Select *:33_log_metrics* index
The visualization below shows the different status codes over time.
The log2metrics index contains logs that hold aggregations of your application and infrastructure logs. This affects the type of Kibana metrics you will use when creating log2metrics visualizations.
- In the filter above for this visualization, I am using the name of the metric itself between quotation marks. If you intend to use the metric name to filter the presentation of your graph, put the log2metrics name between quotes. As a best practice, use name:"name of the metric".
- The Kibana Count metric for log2metrics visualizations is substituted with the Kibana Sum metric, which should be applied to the docsCount field.
- For Kibana buckets Date histogram aggregation use timestamp instead of Coralogix.timestamp.
- For the bucket, we use labels.response_code.value, which takes its values from the response_code field.
- Kibana Min and Max will work as expected, since the min and max of the aggregated logs are the same as those of the original logs.
- Notice that we used labels.response_code.value as one of the buckets. You cannot use this bucket if it does not belong to the same log2metrics metric that we used to filter the data.
- If we had not defined a label (bucket) for the "status code" metric under the log2metrics definition, we would not be able to use this bucket.
- Make sure the buckets you are using exist under the log2metrics metric you are filtering by.
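The substitution rules above can be checked with a small sketch. Using invented per-minute samples, it shows why Sum over docsCount reproduces the raw Count, and why Min and Max survive the aggregation unchanged:

```python
# Invented response times for three minutes of raw logs.
raw_minutes = [
    [120, 340, 95],        # minute 1
    [210, 180],            # minute 2
    [400, 60, 310, 220],   # minute 3
]

# Each minute is collapsed into one aggregated document.
agg_docs = [
    {"docsCount": len(v), "min": min(v), "max": max(v)}
    for v in raw_minutes
]

all_raw = [x for minute in raw_minutes for x in minute]

# Kibana Count over raw logs == Sum over docsCount in the metrics index.
assert sum(d["docsCount"] for d in agg_docs) == len(all_raw)

# Min/Max over the aggregated docs equal Min/Max over the raw logs.
assert min(d["min"] for d in agg_docs) == min(all_raw)
assert max(d["max"] for d in agg_docs) == max(all_raw)
```

This is also why a plain Count in Kibana would undercount: it would tally aggregated documents (one per minute per bucket), not the original logs.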
As mentioned above, the query section can contain whatever query you want. You can create any metric using complex queries based on your log data. Here are a few examples of common use cases:
- Enable long term analysis on your API stability: Store the amount of 5XX responses on your prod server.
- Query – status.numeric:[500 TO *] AND env:production
- Discover business trends: Store the number of successful purchases on your website.
- Query – message:”user completed purchase successfully”
- Discover trends in your application quality: Store the number of exceptions you have.
- Query – message:Exception AND severity:ERROR
- Identify a trend and possible attacks: Store the amount of NXDOMAIN responses by your DNS resolver.
- Query – message:NXDOMAIN
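As a rough illustration of what a query like status.numeric:[500 TO *] AND env:production selects, here is a hypothetical Python equivalent: an inclusive lower-bounded range combined with an exact term match (the log records and field names are invented for the example):

```python
# Invented parsed log records; fields mirror the example queries above.
logs = [
    {"status": 502, "env": "production", "message": "upstream timeout"},
    {"status": 200, "env": "production", "message": "ok"},
    {"status": 503, "env": "staging",    "message": "service unavailable"},
    {"status": 500, "env": "production", "message": "internal error"},
]

# Equivalent of `status.numeric:[500 TO *] AND env:production`:
# the [500 TO *] range is inclusive at 500 and unbounded above.
matched = [
    log for log in logs
    if log["status"] >= 500 and log["env"] == "production"
]
```

Every minute, the matching logs would be aggregated into the long-term index, giving you a durable count of 5XX responses on production long after the raw logs expire.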
Have any questions? Check our website and in-app chat for quick help.