Limitations

Number limits

DataPrime supports numeric values up to a maximum of 2^53 (9,007,199,254,740,992). Using larger numbers may result in precision loss or unexpected behavior in queries.

Limitations in binary precision as defined by the IEEE 754 standard may cause certain decimal values to be stored or displayed imprecisely, even if they appear simple.
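
For illustration, the same 64-bit IEEE 754 behavior can be observed in plain Python (shown here only as a demonstration of the floating-point format, not of DataPrime itself):

```python
# 64-bit floats (IEEE 754 doubles) represent integers exactly only up to 2^53.
MAX_SAFE = 2 ** 53                      # 9,007,199,254,740,992

print(float(MAX_SAFE) == float(MAX_SAFE + 1))   # True: the +1 is silently lost

# Simple-looking decimals also have no exact binary representation.
print(0.1 + 0.2)                        # 0.30000000000000004
print(0.1 + 0.2 == 0.3)                 # False
```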

Limit on fields

As Coralogix ingests your logs, it converts them into a columnar Parquet file format. This transformation improves the performance of DataPrime queries.

Because logs — even within the same dataset — often have varying structures, they must all be normalized into a consistent column-based schema. This automatic normalization process prioritizes commonly used field names. As a result, less frequent or rarely used fields might be excluded from the Parquet schema. However, this doesn't mean the data is lost; it simply wasn't included in the Parquet column structure.

Each Parquet file in Coralogix is limited to a maximum of 5,100 columns. To ensure completeness, the latest version of DataPrime also stores the full original JSON document within a special column. If a query references a field that isn’t among the 5,100 Parquet columns, DataPrime will fall back to searching the raw JSON to retrieve the data.
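
Conceptually, the fallback works like the following pyarrow sketch. The column names, including the raw-JSON column, are illustrative assumptions, not Coralogix's actual schema or engine:

```python
import json
import pyarrow as pa
import pyarrow.parquet as pq

# Two logs with different shapes; "user_id" is a rare field that the
# normalized schema (hypothetically) does not promote to its own column.
logs = [
    {"level": "ERROR", "msg": "timeout", "user_id": 42},
    {"level": "INFO",  "msg": "started"},
]

# Common fields become Parquet columns; the full original document is
# kept in a raw-JSON column so nothing is lost.
table = pa.table({
    "level": [log["level"] for log in logs],
    "msg":   [log["msg"] for log in logs],
    "_raw":  [json.dumps(log) for log in logs],
})
pq.write_table(table, "logs.parquet")

# A query on a promoted field reads only that column...
levels = pq.read_table("logs.parquet", columns=["level"])

# ...while a query on an unpromoted field falls back to parsing the raw JSON.
raw = pq.read_table("logs.parquet", columns=["_raw"]).column("_raw").to_pylist()
user_ids = [json.loads(doc).get("user_id") for doc in raw]
```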

Max query results

The number of results returned by a query depends on the type of the query and where it's being executed.
Web interface queries

| Query Type | Default Limit | Maximum Limit |
|---|---|---|
| Direct aggregation | 2,000 | 5,000 |
| Direct raw data | N/A | 15,000 |
| Archive query | 2,000 | 30,000 |
| Background archive query | N/A | 1,000,000 |

API queries

| Query Type | Default Limit | Maximum Limit |
|---|---|---|
| Direct aggregation | 2,000 | 5,000 |
| Direct raw data | 2,000 | 15,000 |
| Archive query | 2,000 | 50,000 |
| Background archive query | N/A | 1,000,000 |

Shuffle limit

A shuffle limit refers to a constraint on the amount of data that can be processed during operations that require data shuffling, such as joins or aggregations. When a query involves significant data movement—especially in joins with large datasets—exceeding this limit can trigger a warning like shuffleFileSizeLimitReachedWarning.
| Location | Max shuffle size |
|---|---|
| Explore | 1 GB |
| API | 1 GB |
| Background queries | 10 GB |

Background query limitations

When performing background queries, whether via the Coralogix platform or the API, you may encounter a number of system-enforced limits or errors. These are typically related to data volume, storage access, or execution time. The table below outlines the most common error codes and messages, along with guidance to help you understand and resolve them.
| Error | Message | Description |
|---|---|---|
| MAX_RESULTS | "Max results is 1000000. Refine your query." | The query exceeded the maximum number of rows allowed for a single response. This usually indicates that the result set is too large; try reducing the time range or applying filters. |
| SCANNED_BYTES_LIMIT | "Scanned bytes limit has been reached." | The query attempted to scan more data than permitted by your tier or plan. Reduce the data scope, apply filters, or request access to higher query limits. |
| BLOCK_LIMIT | "Blocks limit has been reached." | Too many data blocks were involved in processing the query. This often results from querying a large time range or high-cardinality data. Consider narrowing the query scope. |
| METASTORE_DATA_MISSING | "No data exists in your COS bucket for the given timeframe." | The query timeframe points to archived storage, but there is no available metadata for this period. Check that logs were archived and the dates are correct. |
| BUCKET_ACCESS_DENIED | "Access to bucket denied." | The background query engine was unable to access the specified object storage (COS) bucket due to permission issues. Verify your integration or cloud storage access settings. |
| BUCKET_READ_FAILED | "Failed to read from bucket." | A general I/O error occurred while attempting to read archived data. This may be due to connectivity, permissions, or a corrupted file. |
| BUCKET_MISSING_DATA | "Data partially missing from bucket." | Some of the expected data blocks could not be found or read from your object store. Results may be incomplete. |
| SCROLL_TIMEOUT | "Scroll timed out." | The server took too long to scroll through the result set, usually due to large volumes or resource constraints. Try running a smaller or more efficient query. |

Scanned bytes limit

The number of bytes scanned (for high-tier data) is limited to 100 MB for OpenSearch queries.

This limit applies to fetching 100 MB of high-tier data; it does not restrict scanning within the database storage itself.

Block limit

In Coralogix, data is divided into logical units called blocks. A block is a group of log data bundled together, typically by time intervals.

Each query over archived data must load and scan one or more of these blocks to extract the requested log lines.

The BLOCK_LIMIT error occurs when your query attempts to process too many blocks in a single execution, which exceeds the system-defined cap. This can happen due to:

  • Large time ranges
  • Unfiltered queries: those without time filters, specific field constraints, or where clauses are more likely to hit this limit
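
One common way to stay under the cap is to split a long time range into smaller windows and run the query once per window. A minimal sketch in Python, where run_query is a hypothetical stand-in for whichever query client you use:

```python
from datetime import datetime, timedelta

def query_in_windows(run_query, start: datetime, end: datetime,
                     window: timedelta = timedelta(hours=6)):
    """Run a query over [start, end) in fixed-size windows so each
    execution loads fewer blocks."""
    results = []
    cursor = start
    while cursor < end:
        window_end = min(cursor + window, end)
        results.extend(run_query(cursor, window_end))
        cursor = window_end
    return results
```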

Deleting background queries

Background queries cannot be manually deleted, but they automatically expire after 30 days. If a background query is canceled, all associated data will be deleted immediately. However, the query's metadata will persist for 30 days.

Execution time limitations

Queries are limited to a maximum execution time of 5 minutes in Explore and 30 minutes for background queries.

Queued background queries

A maximum of 50 background queries can be queued. When this limit is reached, new queries cannot be added until queued queries complete and your team's quota becomes available again.

Latency

Background query results may take longer to process because they support extended time ranges and larger data scans.

API rate limit

The number of requests is capped at 30 per minute.
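
If you script against the API, a simple client-side throttle keeps you under that cap. A minimal sketch, where call_api is a hypothetical stand-in for your request function; only the 30-requests-per-minute figure comes from this documentation:

```python
import time

REQUESTS_PER_MINUTE = 30
MIN_INTERVAL = 60.0 / REQUESTS_PER_MINUTE   # one request every 2 seconds

def throttled(call_api, payloads):
    """Issue calls no faster than the documented rate limit."""
    last = 0.0
    for payload in payloads:
        wait = MIN_INTERVAL - (time.monotonic() - last)
        if wait > 0:
            time.sleep(wait)
        last = time.monotonic()
        yield call_api(payload)
```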