Mobile Performance
Identify and investigate mobile performance issues across your app, starting with a high-level overview and drilling down to root causes.
What you can detect with Mobile Performance
Mobile Performance helps you understand how your mobile applications behave in real user environments. It focuses on performance data that directly affect user experience, such as slow app startup, unresponsive screens, rendering issues, crashes, and excessive resource usage. By surfacing these signals at the application and screen level, you can quickly identify performance degradation and investigate its root causes before taking corrective action.
Analyze key operating metrics in the Mobile Performance view
The Mobile Performance view is the primary entry point for analyzing mobile performance data in RUM. It provides a structured overview of performance across your applications and screens, helping you spot regressions, outliers, and trends.
This view combines 3 complementary perspectives:
- Key performance indicators (KPIs): A snapshot of core performance metrics such as startup times, crashes, ANRs, frame rendering, CPU usage, and memory usage.
- Performance graph: Visualize trends over time and spot spikes, slowdowns, or improvements.
- Data table: Break down performance metrics by mobile attributes, such as application and views, to identify which screens contribute most to poor user experience.
You can apply filters such as application name, version, environment, platform, device, and time range to focus the analysis on a specific context. These controls behave consistently across RUM views.
Available actions in the Mobile Performance view
The Mobile Performance view is designed not only to surface performance data, but also to guide you in deeper investigation. Different parts of the view provide different action paths, depending on whether you are summarizing data, exploring trends, or investigating a specific screen.
KPI actions
Each KPI includes a menu with actions that reveal how its value is calculated.
Available actions:
- View query: Displays the DataPrime query used to compute the KPI value.
- Copy query: Copies the underlying DataPrime query to your clipboard.
These actions are intended for advanced analysis and validation. They allow you to understand exactly which data and aggregation logic power the KPI. This is useful when validating calculations or extending the analysis beyond the Mobile Performance view.
KPI values always reflect the overall filtered context (such as application, environment, and time range) and are not affected by Group By selections applied elsewhere on the page.
You can continue the investigation in Explore or Custom Dashboards.
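As an illustration only (the authoritative query for each KPI is the one surfaced by View query), a frame-rate KPI could be approximated in DataPrime along these lines, using the mobile vitals fields described later on this page and the standard applicationname label:

```
source logs
| filter $d.cx_rum.event_context.type == 'mobile-vitals'
| filter $d.cx_rum.mobile_vitals_context.name == 'fps'
| groupby $l.applicationname as app agg avg($d.cx_rum.mobile_vitals_context.value) as avg_fps
```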
Graph interactions
The performance graph lets you explore trends by changing what is measured, how it is aggregated, and the time range in focus.
You can:
- Select a different performance metric and aggregation to change how trends are summarized.
- Adjust grouping dimensions to break down trends by additional attributes.
- Select a time range directly on the graph by clicking and dragging, which updates the global time picker and refreshes all data on the page.
The order of selected grouping dimensions determines how data is organized and ranked in both the graph and the screens table. When performance data is available for the selected grouping, the graph and table update together to reflect the same results.
Graph interactions are exploratory rather than navigational. They help you identify spikes, regressions, and patterns, but do not take you to another page or change the underlying investigation scope.
Row-level actions in the data table
The data table provides the richest set of actions and serves as the primary launch point for deeper investigation.
Each row represents a specific application and screen combination and exposes contextual actions, including:
- Filter in: Adds the selected row’s values to the page filters, narrowing the entire view to that specific screen context.
- Copy query: Copies the DataPrime query used to compute the aggregated metrics for that row.
- Explore logs: Opens the Explore logs view with filters applied to the selected screen and time range.
- Explore sessions: Opens the User Sessions view scoped to the selected screen and time range.
- Explore errors: Opens the Error Tracking view when crash data is present for the selected row.
These actions are designed to help you transition from aggregated performance metrics to the underlying logs, sessions, or errors that explain what happened.
Note that copying the query from a row returns a DataPrime query scoped specifically to that row’s metrics, while the search query shown on the page represents the broader context used to build the entire view.
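For example, a row-scoped query could conceptually look like the sketch below: the page-level mobile-vitals filter plus a filter pinning the row's application (the application name here is a placeholder, and the real copied query also constrains the specific screen), aggregating that row's metrics:

```
source logs
| filter $d.cx_rum.event_context.type == 'mobile-vitals'
| filter $l.applicationname == 'my-mobile-app'
| groupby $d.cx_rum.mobile_vitals_context.name as metric agg avg($d.cx_rum.mobile_vitals_context.value) as avg_value, max($d.cx_rum.mobile_vitals_context.value) as max_value
```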
Drill down on performance issues in context
When a screen shows degraded performance, you can drill down to investigate the issue in context. This deeper view focuses on performance signals for a specific screen and time range, and helps explain why performance deteriorated.
In this context, performance metrics are correlated with additional signals, such as:
- Errors, which may spike during crashes, slow startups, or unresponsive periods.
- Network performance, where slow or failing requests can contribute to long load times or stalled screens.
By correlating performance metrics with these signals, you can narrow down potential root causes and understand which user actions or system behaviors led to a poor experience.
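If you want to perform this correlation manually in Explore, one rough approach is to count RUM events by type for the same filtered context and look for event types (such as errors or network requests) that spike alongside the degraded vitals, for example:

```
source logs
| filter $d.cx_rum.event_context.type != null
| groupby $d.cx_rum.event_context.type as event_type agg count() as events
```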
How performance data is collected
Mobile Performance is powered by mobile vitals metrics automatically collected by the Coralogix mobile SDKs. Collection behavior varies slightly by platform; see Platform differences and configuration below.
The SDK continuously samples operating system and application signals on the device and aggregates them before sending them to Coralogix. This process does not require additional instrumentation.
Collection behavior:
- Sampling occurs roughly every 1 second on the device
- Aggregation is performed every ~15 seconds
- Each aggregated payload includes the minimum, maximum, average, and 95th percentile values observed during the aggregation window
The SDK emits these measurements as structured RUM events containing a cx_rum.mobile_vitals_context object, alongside device, application version, and session metadata.
Because aggregation occurs on-device, very short sessions may not emit mobile vitals data.
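To sanity-check what your SDKs actually emit, you can, for example, count mobile vitals events by metric name and unit (both fields are described in the query section below):

```
source logs
| filter $d.cx_rum.mobile_vitals_context.type != null
| groupby $d.cx_rum.mobile_vitals_context.name as metric, $d.cx_rum.mobile_vitals_context.units as units agg count() as events
```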
Mobile performance metrics
Mobile Performance relies on a predefined set of mobile vitals metrics that describe application stability, responsiveness, rendering smoothness, and resource usage.
Startup and responsiveness
- Cold start: Time from launching the app from a terminated state to the first screen render (ms).
- Warm start: Time from returning the app to the foreground to a ready state (ms).
- ANR (Application Not Responding): Occurs when the main thread is blocked for several seconds, causing the app to become unresponsive (event count).
Stability
- Crash count: Number of times the application unexpectedly terminates during use (event count)
Rendering smoothness
- Frame rate (FPS): Average frames rendered per second (fps)
- Slow frames: Frames that exceed the refresh threshold (typically ~16 ms on 60 Hz displays)
Resource usage
- CPU usage: CPU utilization and process execution time (%, ms)
- Memory usage: Physical footprint, resident memory size, and utilization (MB, %)
Query Mobile Performance data
All Mobile Performance metrics are backed by mobile vitals events and can be queried directly in Explore or DataPrime.
To locate mobile vitals events, filter for the presence of the mobile vitals context:
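```
source logs
| filter $d.cx_rum.mobile_vitals_context.type != null
```

Filtering on cx_rum.event_context.type equal to mobile-vitals, as in the dashboard example later on this page, works just as well.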
These events expose fields such as:
- cx_rum.mobile_vitals_context.name – metric name (for example, cpu_usage, fps)
- cx_rum.mobile_vitals_context.value – measured value
- cx_rum.mobile_vitals_context.units – unit of measure (percentage, ms, fps, mb)
For field references and additional examples, see Query RUM Logs – Mobile Vitals.
Advanced analysis in Custom Dashboards
In addition to the built-in Mobile Performance view, you can use Custom Dashboards to perform deeper or long-term analysis of mobile performance data.
Dashboards are useful when you want to:
- Track performance trends across releases or environments
- Compare performance across platforms, devices, or versions
- Combine mobile performance metrics with other observability signals
Each mobile performance metric can be charted, aggregated over time, and grouped by attributes such as application version, OS, or device.
The Mobile Vitals count by type widget displays the number of Mobile Vitals events emitted over time, grouped by metric type.
Use this visualization to confirm that the SDK reports metrics as expected and to compare event volume across different vitals.
Example DataPrime query:
```
source logs
| filter $d.cx_rum.event_context.type == 'mobile-vitals'
| filter $d.cx_rum.event_context.type != null
| filter $d.cx_rum.mobile_vitals_context.type != null
| groupby $m.timestamp / 153s as $d['timestamp'], $d.cx_rum.event_context.type as $d['cx_rum.event_context.type'], $d.cx_rum.mobile_vitals_context.type as $d['cx_rum.mobile_vitals_context.type'] agg count() as $d['count']
```
Build a Mobile Performance widget
Use the Query Builder to create a simple overview of all Mobile Vitals metrics in one chart.
- In the Filters panel, set a filter for RUM events: cx_rum.event_context.type equal to mobile-vitals.
- In the Functions panel, group by cx_rum.event_context.type and cx_rum.mobile_vitals_context.name.
- Set the aggregation to Average and select cx_rum.mobile_vitals_context.value.
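Under the hood, this widget corresponds roughly to the following DataPrime sketch (field names as documented above; the exact query generated by the Query Builder may differ):

```
source logs
| filter $d.cx_rum.event_context.type == 'mobile-vitals'
| groupby $d.cx_rum.event_context.type as event_type, $d.cx_rum.mobile_vitals_context.name as metric agg avg($d.cx_rum.mobile_vitals_context.value) as avg_value
```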
Example dashboard panels
| Use case | Lucene filter | Suggested aggregation |
|---|---|---|
| CPU usage (p95) | cx_rum.mobile_vitals_context.name:"cpu_usage" | p95 over 1–5 min intervals |
| Memory footprint (avg) | cx_rum.mobile_vitals_context.name:"footprint_memory" | avg / p95 grouped by app_version |
| Frame rate (FPS) | cx_rum.mobile_vitals_context.type:"fps" | avg by device_context.osVersion |
| Slow / Frozen frames | cx_rum.mobile_vitals_context.slow_frozen: * | sum per interval |
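As a sketch, the memory footprint panel above could be expressed in DataPrime roughly as follows; the 5-minute bucket and the grouping by application name are illustrative choices, and you can swap the aggregation for a percentile where the table suggests p95:

```
source logs
| filter $d.cx_rum.event_context.type == 'mobile-vitals'
| filter $d.cx_rum.mobile_vitals_context.name == 'footprint_memory'
| groupby $m.timestamp / 5m as bucket, $l.applicationname as app agg avg($d.cx_rum.mobile_vitals_context.value) as avg_footprint_mb
```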
For step-by-step dashboard creation, see Create and Manage Custom Dashboards.
Platform differences and configuration
- iOS: Metrics rely on MetricKit for CPU, memory, and startup data.
- Android: Metrics use Android Vitals and system-level sampling.
- React Native: Metrics are collected from the native layer; lifecycle handling differs slightly.
- Cross-platform variance: CPU and memory values may differ because each OS uses its own APIs.
Individual mobile vitals metrics can be enabled or disabled through SDK configuration, depending on the platform. Learn more about how performance data is collected.
Mobile data collection behavior and limitations
- Android and iOS: Individual Mobile Vitals metrics (CPU, memory, FPS, start times) can be enabled or disabled in the SDK configuration. See the SDK installation guides for more details.
- React Native: Mobile Vitals can be configured from JavaScript and forwarded to the native SDKs. See React Native Plugin installation for more details.
- Sampling interval: Data is collected continuously and aggregated every 15 s.
Note
Sessions shorter than 15 s may not emit Mobile Vitals events, because the SDK reports aggregated data in 15 s intervals.
Learn more
For query examples and field references, see Query RUM Logs – Mobile Vitals.



