Compare metadata vs. data field results in log queries
TL;DR
Many logs contain a field like `$d.severity`, but it doesn't always represent the standard severity level (e.g., `ERROR`, `INFO`, `CRITICAL`). It might indicate something else entirely, such as internal scoring, alerting status, or domain-specific meaning.
For log severity checks, you should always default to `$m.severity`, which is normalized by Coralogix at ingest. It guarantees consistent severity levels across all sources. Unless `$d.severity` is deliberately used for a custom purpose in your schema, prefer `$m.severity` for filtering, grouping, or analyzing severity levels.
Problem / Use Case
You're querying for logs with `severity == ERROR` and getting surprisingly few results. This often happens when querying the `$d` (user data) object instead of `$m` (metadata), which is where the normalized severity is stored.
For example, a query that filters on the data field looks correct, but in our test data it returned only 55 results.
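As a sketch, assuming the same data-field filter used in the combined query below, that query might look like:

```
source logs
| filter $d.severity.toLowerCase() == 'error' && now() - $d.timestamp:timestamp < 1d
```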
However, changing it to use metadata fields gives dramatically different results: nearly 10,000.
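Again as a sketch, the metadata-based equivalent uses the same filter as the left side of the combined query below:

```
source logs
| filter $m.severity == ERROR && now() - $m.timestamp < 1d
```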
Query
This query compares the number of `ERROR` logs using `$m.severity` (the normalized, system-level severity) against `$d.severity` (the raw field in the log body). It filters each side independently for logs from the last 24 hours, counts them, and joins the results side by side for easy comparison. Use this pattern to validate whether your logs rely on `$d.severity`, and whether `$m.severity` is safe to use as the default.
```
source logs
| filter $m.severity == ERROR && now() - $m.timestamp < 1d
| count into log_m_fields
| create dummy_key from 'comparison'
| join (
    source logs
    | filter $d.severity.toLowerCase() == 'error' && now() - $d.timestamp:timestamp < 1d
    | count into log_d_fields
    | create dummy_key from 'comparison'
  )
  on left=>dummy_key == right=>dummy_key
  into copies
| choose copies.log_d_fields as log_d_fields, log_m_fields
```
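The `create dummy_key from 'comparison'` steps exist only to give the join something to match on: each side produces a single count row, and joining on the shared constant lines the two counts up in one row.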
Expected Output
A single-row table comparing the two counts over the last 24 hours:
| log_d_fields | log_m_fields |
| --- | --- |
| 55 | 9914 |
This output highlights how querying `$m.severity` yields far more accurate results than `$d.severity`.
Variations
- Change the `timestamp` granularity (`/1h`, `/1d`) to get finer comparisons.
- Swap `join` for `outer join` to show missing values on either side (see the sketch below).
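As a rough sketch of the second variation, only the join keyword in the query above changes, as the bullet suggests (the subquery, keys, and output step stay the same):

```
| outer join (
    source logs
    | filter $d.severity.toLowerCase() == 'error' && now() - $d.timestamp:timestamp < 1d
    | count into log_d_fields
    | create dummy_key from 'comparison'
  )
  on left=>dummy_key == right=>dummy_key
  into copies
```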