Analyze log patterns with Loggregation
Use Loggregation to group high-volume logs into meaningful patterns so you can quickly identify new, rare, and abnormal errors, reduce noise during investigations, and operationalize error discovery with alerts and automation.
With Loggregation, you can:
- Identify newly introduced and rare errors
- Reduce log noise without losing access to raw data
- Investigate error patterns across applications, subsystems, and infrastructure
- Automate detection of new error patterns after deployments
How Loggregation works
What a template represents
A template represents a recurring log pattern. Loggregation evaluates incoming logs and groups messages that share the same constant structure into a single template. Each template acts as a compact summary of many similar log entries, allowing you to reason about behavior at the pattern level instead of inspecting individual lines.
Templates are created and evaluated within a template branch, defined by application, subsystem, and severity. This scoping keeps patterns meaningful by grouping logs only within the same operational context.
How variables shape a pattern
Each log message is made up of constant text and variable values. The constant text defines the shape of the message, while variable values change between occurrences. Loggregation uses the constant parts to identify patterns and replaces the changing values with placeholders.
For example, log messages that differ only in their variable values, such as user IDs or IP addresses, are grouped into a single pattern in which those values are replaced with placeholders.
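As an illustration (these log lines and placeholder names are invented; the exact placeholder syntax in the product may differ), messages like these:

```
User 1234 logged in from 10.0.0.1
User 5678 logged in from 10.0.0.2
User 9012 logged in from 10.0.0.3
```

would be grouped into a single template along the lines of:

```
User {NUMBER} logged in from {IPV4}
```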
How Loggregation reflects the full log set
Templates represent the full set of logs in the time range you query, not a sample, which makes them reliable for detecting rare and newly introduced errors. When you investigate patterns, you can treat them as the complete picture of the logs you queried.

Explore and interpret templates
In Explore, select the Templates tab to switch to the Loggregation view. The view updates as you change the query, filters, or time range.
Explore supports multiple ways to view your logs:
- Log view — Examine individual log entries in sequence.
- Template view — See recurring patterns and their frequency.
When working with structured logs, you can narrow visible fields in the JSON view to focus on data relevant to your investigation.
Use occurrence counts and ratios to assess impact
The patterns table shows each detected template along with:
| Column | Description |
|---|---|
| Pattern | The log template with variable placeholders |
| Count | Number of log entries matching this pattern |
| Ratio | Percentage of total logs represented by this pattern |
| New | Flags patterns that did not appear in the previous comparable time window |
| Rare | Flags patterns that occur significantly less frequently than others in the result set |
Sort the table by count, ratio, or pattern text to prioritize which patterns to investigate.
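The prioritization described above can be sketched in code. This is a minimal illustration of how count and ratio relate, using invented data; the real table is computed by Loggregation:

```python
# Each detected template with its occurrence count (hypothetical data).
templates = [
    {"pattern": "User {NUMBER} logged in from {IPV4}", "count": 9500},
    {"pattern": "Payment failed for order {NUMBER}", "count": 12},
    {"pattern": "Cache miss for key {STRING}", "count": 488},
]

total = sum(t["count"] for t in templates)

# Ratio: percentage of total logs represented by each pattern.
for t in templates:
    t["ratio"] = round(100 * t["count"] / total, 2)

# Sort ascending by count to surface rare patterns first.
rare_first = sorted(templates, key=lambda t: t["count"])
print(rare_first[0]["pattern"])  # the rarest pattern
```

Sorting ascending by count is the programmatic equivalent of the low-occurrence sort described below: the rarest, and often most interesting, patterns come first.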
Find new, rare, and abnormal errors
Surface rare errors using low-occurrence sorting
Sorting templates by the lowest occurrence count is an effective way to uncover rare errors. These patterns often indicate edge cases, regressions, or early signs of larger issues that would be easy to miss in raw logs.
Identify newly introduced errors using First Seen
Templates include a First Seen timestamp that indicates when a pattern first appeared. This is especially useful after deployments or configuration changes. Use First Seen to confirm whether an error pattern existed before the change window or was introduced afterward.
Uncover logs not yet templated
Not all logs immediately belong to a template. To find new or unusual messages that have not yet formed a stable pattern, select the Logs tab and filter for logs without a template ID using NOT _exists_: coralogix.templateId. This helps you catch newly introduced errors before they become frequent enough to form a template.
Investigate a specific pattern
Filter logs by template ID
Each template has a unique template ID you can use as a stable handle during investigation. Filter by template ID to view all log entries that belong to the same pattern, and confirm whether an error is isolated to a specific host, environment, or service.
Combine template ID with Lucene filters
Narrow your investigation by combining a template ID with additional Lucene filters, such as server, environment, or metadata fields. This helps you understand how a pattern behaves across different parts of your system.
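For example, a combined query might look like the following (the template ID and the environment and host field names are hypothetical; coralogix.templateId is the template ID field referenced elsewhere in this page):

```
coralogix.templateId:"abc123" AND environment:"production" AND NOT host:"canary-01"
```

This restricts the results to one pattern's occurrences in production, excluding a canary host, so you can tell whether the error is isolated or widespread.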
Select any pattern row to:
- View the individual log entries that match the pattern.
- Inspect specific entries in the log details panel.
- Select Filter to pattern to add the pattern as a filter and view only the matching logs in the main results table.
Reduce noise during active investigations
Unclassified templates can obscure more important signals. To remove them from the current view, select the view icon and exclude unclassified templates. Removing a template from view does not delete data; all underlying log entries remain fully searchable.
To remove other templates from view, filter them out with Lucene.
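For instance, to hide a single noisy template by its ID (the ID below is hypothetical):

```
NOT coralogix.templateId:"abc123"
```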
Create alerts on patterns
To monitor a pattern automatically:
- Select the pattern you want to alert on.
- Select Create alert from the pattern row actions.
- Configure the alert conditions and notification settings.
You can also configure alerts to trigger when new templates appear, enabling proactive detection of new error patterns without manually inspecting logs. This is especially useful after deployments.
Automate error detection workflows
The Insights API lets you query and analyze log patterns programmatically. Use it to fetch top errors, most recent errors, and other actionable data.
A common automation pattern is to run a post-deployment check that looks for newly introduced templates or errors after a short delay, then send notifications to external systems such as Slack for fast feedback.
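A sketch of such a check, assuming the template metadata has already been fetched from the Insights API (the field names, timestamps, and notification step below are hypothetical; consult the API reference for the real response shape):

```python
from datetime import datetime, timezone

def find_new_templates(templates, deploy_time):
    """Return templates whose first-seen timestamp is after the deployment."""
    return [t for t in templates if t["first_seen"] > deploy_time]

# Hypothetical data standing in for an Insights API response.
deploy_time = datetime(2024, 5, 1, 12, 0, tzinfo=timezone.utc)
templates = [
    {"pattern": "Connection pool exhausted",
     "first_seen": datetime(2024, 5, 1, 12, 7, tzinfo=timezone.utc)},
    {"pattern": "User {NUMBER} logged in",
     "first_seen": datetime(2024, 4, 2, 9, 0, tzinfo=timezone.utc)},
]

new = find_new_templates(templates, deploy_time)
for t in new:
    # In a real workflow, send this to Slack or another notification system.
    print(f"New error pattern after deploy: {t['pattern']}")
```

Running this shortly after a deployment, with a small delay to let logs accumulate, gives fast feedback on whether the release introduced new error patterns.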
Limits
| Limit | Value |
|---|---|
| Template branches | 1,000 maximum (defined by application, subsystem, and severity) |
| Logs per branch | 10,000 logs per template branch |
| Template retention | Up to 90 days after the last matching log |
New templates are created only after a pattern reaches a defined occurrence threshold, preventing unstable messages from forming misleading templates too early.
Unclassified logs
Logs may remain unclassified when message fields have very high cardinality, when messages are excessively long, or when their structure prevents reliable pattern extraction. Unclassified logs are not dropped — they remain fully visible and searchable.
Filter for unclassified logs with NOT _exists_: coralogix.templateId. Use branch-level details to identify root causes and decide whether parsing, normalization, or logging changes are needed.