
TCO Optimizer

The Total Cost of Ownership (TCO) Optimizer for logs and traces reduces costs by aligning data priorities with the business value of your data. Critical data remains instantly searchable for real-time analysis and alerting, while lower-value data is routed to cost-efficient storage.

Gain control over data processing and storage

TCO Optimizer gives you precise control over how logs and traces are processed and stored. You create policies using the fields application, subsystem, and severity so that each data type is handled appropriately based on its importance.

This enables you to balance cost and performance—keeping high-priority logs hot for analysis, archiving compliance data, and managing monitoring data at scale—delivering full observability at a fraction of the cost.

TCO priority levels explained

When logs and traces are ingested, they are routed to one of three pipelines, known as priority levels, or blocked, based on your policies.

High

Business-critical or high-severity data is stored on fast, replicated SSDs, allowing queries, alerts, and investigations to complete within seconds. This is the default priority for unmatched logs and traces.

Medium

Data used for monitoring and statistics remains fully accessible for dashboards, alerts, anomaly detection, and ongoing analysis at scale.

Low

Data retained for compliance or post‑processing is archived immediately to minimize storage costs while remaining retrievable when needed.

Blocked

Data is dropped at ingestion and not stored. Users incur a minimal cost.

Features by priority level

Each priority level provides different capabilities across the platform. The availability of the following features depends on whether data is routed to the High, Medium, or Low priority:

  • AI Evaluators
  • Alerts
  • APM with Events2Metrics
  • APM with Span Metrics
  • Archive
  • Background Queries
  • Custom Dashboards
  • Data Enrichment
  • Events2Metrics
  • Lightning Queries
  • LiveTail
  • Log Parsing
  • Loggregation
  • Real User Monitoring
  • Session Replay

Note

Service Map and Service Catalog for APM are available regardless of priority.

About log & trace policies

Data is routed to priority levels based on the policies that you create. Policies use the fields application, subsystem, and severity to match logs and traces.

Evaluation order matters. Policies are evaluated from top to bottom. When a log or trace matches the first policy, no subsequent policies are applied. Logs and traces that do not match any policy go to the High priority by default. Reorder policies in the UI via drag‑and‑drop.
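The first-match semantics described above can be sketched in a few lines of Python (an illustration only; the policy shape and field names below are simplified assumptions, not the Coralogix API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Policy:
    """Hypothetical policy shape for illustration."""
    name: str
    application: Optional[str]  # None matches any application
    subsystem: Optional[str]    # None matches any subsystem
    severities: Optional[set]   # None matches any severity
    priority: str               # "high" | "medium" | "low" | "block"

    def matches(self, item: dict) -> bool:
        return (
            self.application in (None, item["application"])
            and self.subsystem in (None, item["subsystem"])
            and (self.severities is None or item["severity"] in self.severities)
        )

def route(item: dict, policies: list) -> str:
    """Evaluate policies top to bottom; the first match wins."""
    for policy in policies:
        if policy.matches(item):
            return policy.priority
    return "high"  # unmatched logs and traces default to High

```

Reordering the `policies` list is the code equivalent of drag-and-drop reordering in the UI: moving a policy higher gives it precedence over everything below it.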

Usage overview

Use the Usage overview to understand how your daily quota is consumed across TCO pipelines and how much you save by routing data to lower-cost priorities.

Hover over any pipeline in the top bar to open the Usage overview panel.

The panel provides the following information.

Daily team quota

Shows your available daily unit quota. Coralogix uses a unit-based pricing model, where a unit is a universal billing metric per observability pillar. This value represents the total number of units your team can consume per day. For more information, see the daily unit quota documentation.

Yearly savings

Shows the percentage of cost savings achieved by TCO optimizations. Savings are calculated based on data routed to Medium and Low priorities rather than High.
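As a rough sketch of that calculation, suppose data routed to Medium and Low consumes fewer units per GB than High (the rates below are hypothetical placeholders, not Coralogix pricing):

```python
# Hypothetical per-GB unit rates; actual rates depend on your Coralogix plan.
UNIT_RATES = {"high": 1.0, "medium": 0.4, "low": 0.15}

def savings_percent(gb_by_priority: dict) -> float:
    """Percent saved versus sending all data at High priority."""
    actual = sum(gb * UNIT_RATES[p] for p, gb in gb_by_priority.items())
    baseline = sum(gb_by_priority.values()) * UNIT_RATES["high"]
    return 100 * (1 - actual / baseline)
```

For example, with these assumed rates, routing 50 GB to High, 30 GB to Medium, and 20 GB to Low would save 35% compared with sending all 100 GB at High priority.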

Usage by priority

Shows how ingested data is distributed across TCO priorities. For each priority, you can see how many units are consumed by each data type, such as logs and traces, and how this contributes to your total daily quota.

For example, if your daily quota is 1,000 units and High priority data consumes 800 units, the panel breaks this down by data type, such as 600 units for logs and 200 units for traces. The percentage view shows how much of the quota each data type consumes within that priority.
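The arithmetic in this example can be reproduced with a small helper (a sketch; the exact rounding shown in the panel is an assumption):

```python
def usage_breakdown(daily_quota: float, units_by_type: dict) -> dict:
    """Each data type's unit consumption as a percentage of the daily quota."""
    return {t: round(100 * u / daily_quota, 1) for t, u in units_by_type.items()}
```

With a 1,000-unit quota, 600 log units and 200 trace units work out to 60% and 20% of the quota, respectively.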

Units per pillar

Shows unit consumption per observability pillar, broken down by TCO priority. This view helps you understand which pillars contribute most to your usage within each pipeline.

Use the Usage overview to quickly assess quota consumption, validate the impact of your TCO policies, and identify opportunities for further cost optimization.

Create a general policy

Open the new policy form

Navigate to Data Flow, then TCO Optimizer. Select + New policy.

Define details

  • Policy name — Required
  • Description — Optional context for collaborators
  • Policy order — Choose First (highest precedence) or Last (applies after existing policies)

Add filters

Use the Builder to choose which logs this policy applies to.

  • Click Select field and add one or more conditions for application, subsystem, and/or severity using the operators is, is not, includes, or starts with.
  • Use + Add filter to combine conditions.

Examples

  • application = "payments-service"
  • subsystem = "worker"
  • severity IN ["ERROR", "WARN"]
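One plausible reading of the Builder's operators is sketched below; the exact matching semantics (for example, whether includes means substring match) are assumptions, not documented behavior:

```python
def condition_matches(value: str, operator: str, operands: list) -> bool:
    """Hypothetical semantics for the Builder's field operators."""
    if operator == "is":
        return value in operands  # exact match against any listed value
    if operator == "is not":
        return value not in operands
    if operator == "includes":
        return any(op in value for op in operands)  # assumed substring match
    if operator == "starts with":
        return any(value.startswith(op) for op in operands)
    raise ValueError(f"unknown operator: {operator}")
```

Under this reading, `severity IN ["ERROR", "WARN"]` corresponds to an is condition with two operands.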

Set priority

Choose the target priority for matching data: high (default), medium, low, or block.

As you select a priority, the card displays the estimated Units (U) and the expected change compared to your baseline.

Set retention policy

Choose your archive retention policy: none, default, short, intermediate, or long.

Create

Click Create. The policy appears in the TCO Optimizer list in the order it was created.

Create a granular policy

You can reroute a subset of logs that would otherwise match a broader policy.

Example

  • General rule: all severities for application = "azure" and subsystem = "cdn" go to high.
  • Exception: logs with severity = "INFO" should go to Low.

Create a specific policy for the exception either in Log policies or directly from Log statistics. A more granular policy created from Log statistics is placed above the general policy, so it takes precedence.

Statistics

Statistics show all ingested logs or traces organized by application, subsystem, and severity, along with data usage for each group. Use it to understand where your data volume originates and to convert insights into precise TCO policies.

Use this view to:

  • Group by application, subsystem, severity, affecting policy, and priority.
  • Expand an application to see its subsystems, then expand a subsystem to see its severity breakdown.
  • See the contribution to % Quota, Data sent, and Units (U) for each combination.
  • Validate expected impact before saving.

Row menu actions

Row menu actions include:

  • Add rule — Pre‑fills the policy form with the row’s values.
  • Drill into logs — Opens a query filtered to the row’s values.
  • Copy as filter — Copies application/subsystem/severity chips for reuse.

Slice and filter the data

  • Group by chips — Choose which fields (e.g., application, subsystem, severity) to include in the breakdown. Drag and drop chips to rearrange them.
  • Search — Filter rows by name (supports partial matches).
  • Include/exclude values — Use chip menus to add or remove specific values.
  • Reset — Clear temporary filters and return to the default settings.

Create a policy from statistics

Turn any selection into a policy without leaving the page:

  1. Select a row.
  2. Choose Add rule. The Create policy form opens with the Builder pre‑filled:
    • application = "<selected-application>"
    • subsystem = "<selected-subsystem>" (when applicable)
    • severity IN ["<selected-severity>"]
  3. Choose a Priority: High (default), Medium, Low, or Block.
  4. (Optional) Set Archive retention: None, Default, Short, Intermediate, or Long.
  5. Review the Preview panel for Data sent, % quota, and Units (U) impact.
  6. Toggle Enable policy on to activate immediately.
  7. Click Create.

The new policy appears in the TCO rules table. Remember that order matters; set Policy order to place it First (highest precedence) or Last.

Understand the “affecting policy” column

When a combination already matches a policy, the Affecting policy column displays the policy that is currently in effect.

Use it to:

  • Avoid creating duplicate policies.
  • Spot combinations that fall into the default High priority (shown as blank or none).
  • Decide whether to edit an existing policy or add a higher‑precedence one.

Log overrides: End-of-life and migration

The Override table in your TCO UI is slated for deprecation on March 31, 2026. From that point forward, all existing override rules will automatically appear as policies at the top of the Log policies panel. As granular policies, they will be executed before more general policies, preserving the current override functionality.

Before March 31, 2026, during the hybrid phase, the Override panel will continue to exist, and override APIs will remain functional.

How can I prepare?

UI users are strongly encouraged to migrate their overrides to granular log policies before that date. To do so, hover over an override, then click the convert action.

  • The rule is removed from the override table.
  • A new TCO policy is created with the same conditions and settings.
  • The converted rule appears in the Log policies table and follows its execution order.
  • This action can’t be undone.

Should I delete or convert an existing override?

Compare the override's TCO priority with the fallback priority. If the two are identical, the override is redundant and can simply be deleted; otherwise, convert it to a granular policy.

Find out more

Find out more in the Overrides EoL Announcement.