Heroku Metric Logs

Our Heroku integration automatically detects inbound platform and custom metric logs and parses them into JSON. To avoid confusion, we'll show some before-and-after examples, and explain how to parse these values and generate metrics from them once they appear in your account.

Before & after

Heroku metric logs appear as key-value pairs in Syslog format. Coralogix automatically extracts the Syslog metadata and makes the body of the syslog message available.

The unparsed body of the log (without syslog metadata) appears like this:

dyno=heroku.7af55a6f-b5eb-46e7-b05c-36d3dc04ef22.5df262fa-c95f-4e06-a6c1-6d189b9781e9 source=web.1 sample#load_avg_1m=0.05 sample#load_avg_5m=0.03 sample#load_avg_15m=0.02

While this is readable, querying each individual field is difficult without complex regex parsing. To make this easier for our Heroku customers, we have enabled JSON transformation for this incoming data, so that when it arrives in your Coralogix account, the log body above is transformed into this:

{"dyno":"heroku.7af55a6f-b5eb-46e7-b05c-36d3dc04ef22.5df262fa-c95f-4e06-a6c1-6d189b9781e9","sample#load_avg_15m":"0.02","sample#load_avg_1m":"0.05","sample#load_avg_5m":"0.03","source":"web.1"}
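The transformation from the key-value body to JSON can be sketched locally. This is a hypothetical re-implementation for illustration, not the integration's actual code; the function name `parse_metric_body` is our own:

```python
import json

def parse_metric_body(body: str) -> dict:
    """Split a Heroku key=value metric log body into a dict.

    Hypothetical sketch of the JSON transformation; values stay strings,
    matching the JSON output shown above.
    """
    return dict(token.split("=", 1) for token in body.split() if "=" in token)

body = (
    "dyno=heroku.7af55a6f-b5eb-46e7-b05c-36d3dc04ef22."
    "5df262fa-c95f-4e06-a6c1-6d189b9781e9 "
    "source=web.1 sample#load_avg_1m=0.05 "
    "sample#load_avg_5m=0.03 sample#load_avg_15m=0.02"
)
parsed = parse_metric_body(body)
print(json.dumps(parsed, sort_keys=True))
```

Note that each value is kept as a string; the `:num` cast in the DataPrime queries below handles numeric comparison at query time.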

Rather than extracting the values they need, customers can run direct queries on their incoming metric logs. For example, the following query extracts all metric samples where the number of active connections is greater than 0:

filter $d['sample#active-connections']:num > 0

This also opens up reporting for Heroku customers, who can build queries that compute averages over time using DataPrime:

filter addon != null 
| groupby addon avg($d['sample#memory-percentage-used']:num) as avg_memory_utilization

I am not seeing JSON-parsed metrics

This feature has been enabled for new users only, to ensure backwards compatibility with our existing Heroku users. Contact our support team via the in-app chat if you wish to move to the new format. In time, this will be rolled out for all customers.

Handling of units

Units are automatically extracted from the value of the metric, and suffixed into the key. For example, consider the following metric log, which comes from Heroku:

source=DATABASE addon=postgresql-animate-79954 ... sample#memory-total=3944456kB sample#memory-free=84080kB sample#memory-percentage-used=0.97868 sample#memory-cached=3194564kB

Notice that the value of sample#memory-total is 3944456kB. Our integration attempts to ensure that this is a calculable numeric value out of the box, so it converts it into sample#memory-total-kB with value 3944456, making it natively available for conversion using Events2Metrics.

Note that any pattern after the numeric value is treated as a unit and handled in this way. For example, sample#my-metric=100foo becomes {"sample#my-metric-foo": 100}. The unit's casing is preserved.
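The behavior described above can be approximated with a small regex. This is an illustrative sketch, not the integration's actual rules; the regex and the `split_unit` helper are assumptions:

```python
import re

# Number (integer or decimal), followed by a unit that does not start
# with a digit or a decimal point. Illustrative approximation only.
UNIT_RE = re.compile(r"^(-?\d+(?:\.\d+)?)([^\d.].*)$")

def split_unit(key: str, value: str):
    """If value is a number followed by a unit, move the unit into the key.

    Hypothetical sketch of the unit-extraction behavior described above;
    values without a trailing unit are returned unchanged.
    """
    m = UNIT_RE.match(value)
    if m:
        number, unit = m.groups()
        num = float(number) if "." in number else int(number)
        return f"{key}-{unit}", num
    return key, value

print(split_unit("sample#memory-total", "3944456kB"))
print(split_unit("sample#my-metric", "100foo"))
print(split_unit("sample#memory-percentage-used", "0.97868"))
```

Purely numeric values such as 0.97868 pass through untouched, while 3944456kB becomes the key sample#memory-total-kB with the numeric value 3944456.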

This may cause edge cases if undesired special characters appear as units in your logs. In that case, use Parsing Rules to change the value. See below for an example of how to handle these edge cases with parsing rules.

Converting incoming metric logs into metrics

Coralogix works well for long-term log retention, but converting logs to metrics provides a low-cost, simple way to maintain high query performance over months of data. For example, if we consider the sample#active-connections field, we can define an Events2Metrics rule for it that automatically generates a timeseries metric:

A dialog showing the field sample#active-connections being extracted into a timeseries metric called postgres_active_connections

This practice is especially useful for users who wish to maintain dashboard performance over long periods of time for a set of important metrics. It is also important to label your metrics with useful dimensions to query on. In the dialog above, we also extract a number of fields that function as labels:

A dialog showing the labels that have been added to the previous metric. The labels are source, addon & Application

This means that data can now be queried using PromQL in the Coralogix app, and the labels will be available on the metric. For example, the following query shows Postgres read IOPS over time, grouped by source & addon names:

avg(postgres_read_iops{postgres_addon=~'{{ postgres_addon }}',postgres_source=~'{{ postgres_source }}', postgres_application=~'{{ postgres_application }}'}) by (postgres_source, postgres_addon) or on() vector(0)

Custom application metrics

Custom application metrics that conform to the expected format are also converted. That is, they should follow <metric-type>#<metric-name>=<value>, for example counter#total_cpu_seconds=500. If you wish your metrics to be available for conversion into Coralogix metrics using Events2Metrics, the value MUST be numeric (integer or decimal) and cannot contain any additional characters, such as % symbols.
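The format requirement above can be checked with a simple pattern. This validator is a hypothetical sketch for illustration; the regex and the `is_convertible` name are our own assumptions, not part of the integration:

```python
import re

# <metric-type>#<metric-name>=<value>, where value must be purely numeric
# (integer or decimal) with no trailing characters. Illustrative only.
METRIC_RE = re.compile(r"^\w+#[\w.-]+=-?\d+(?:\.\d+)?$")

def is_convertible(token: str) -> bool:
    """Return True if a custom metric token has a purely numeric value,
    making it eligible for Events2Metrics conversion."""
    return METRIC_RE.match(token) is not None

print(is_convertible("counter#total_cpu_seconds=500"))  # numeric: OK
print(is_convertible("sample#cpu-usage=47.5"))          # decimal: OK
print(is_convertible("sample#cpu-usage=47.5%"))         # trailing %: rejected
```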

Handling unit edge cases with parsing rules

Consider the following incoming log that we want to handle:

sample#my-metric=50%

This will be converted into JSON by our integration, so what arrives at your Coralogix account will look like this:

{
    "sample#my-metric-%": 50
}

In its current form, this is valid JSON and will work just fine, although you may wish to make the key more readable. To do this, open Parsing Rules:

Parsing rules under the Data Flow menu

From here, either add a rule to an existing rule group or select Replace from the rule options at the top of the page:

Replace Parsing rule

Then, using the following regular expression, we can replace the unit suffix with a more readable one.

Replace Parsing rule

The regular expression is simply -% (the suffix we want to replace), and the replacement string is -perc. This transforms the log above into:

{
    "sample#my-metric-perc": 50
}
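The effect of this Replace rule can be previewed locally with a plain regex substitution. This is a sketch of the string replacement only; it does not model the full Parsing Rules engine:

```python
import re

raw = '{"sample#my-metric-%": 50}'

# The Replace rule's regular expression is "-%" and the
# replacement string is "-perc"; both characters are literal here.
rewritten = re.sub(r"-%", "-perc", raw)
print(rewritten)  # {"sample#my-metric-perc": 50}
```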