Environment Filter
Overview
The Environment filter in APM provides an easy and consistent way to query, filter, and group APM telemetry data by environment (for example, dev, staging, or prod). It is available for Span Metrics users in both compact and full modes.
Note
The Environment filter is optional.
When configured, the Environment filter becomes a global filter across all APM views:
- Main pages (Service Catalog, Database Catalog): Filterable only by Environment.
- Internal drilldowns: The Environment selection persists while additional dynamic filters can be applied. These are always scoped to the selected environment.
The following image shows the Environment filter enabled in APM.
Why use the Environment filter?
The Environment filter helps to organize and analyze telemetry data according to where code is running. This prevents signal overlap between environments and keeps dashboards, alerts, and investigations scoped to the correct context.
User benefits
- Faster triage and on-call: Focus on a specific environment (for example, production) without noise from test or staging data.
- Release validation: Compare latency and error rates between staging and production during rollouts.
- Cost and optimization: Attribute ingest volume, error hotspots, or heavy endpoints to particular environments for better resource management.
Configuration
Coralogix Helm deployments (Kubernetes)
Set `deploymentEnvironmentName` once in Helm and enable the `resourceDetection` preset:
```yaml
# values.yaml
global:
  deploymentEnvironmentName: "<dev|staging|prod>"

presets:
  resourceDetection:
    enabled: true
    # Uses the value above to populate OTEL resource attributes
    deploymentEnvironmentName: "<dev|staging|prod>"
```
How it works:
- `global.deploymentEnvironmentName`: The single value you define in Helm.
- `presets.resourceDetection.deploymentEnvironmentName`: Applies that value as the OpenTelemetry resource attribute `deployment.environment.name` in the Collector configuration.
This setup ensures consistent Environment values across all rendered templates and avoids duplication.
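As a concrete example, a production values file following the structure above might look like this (the file name is illustrative):

```yaml
# values-prod.yaml (illustrative per-environment values file)
global:
  deploymentEnvironmentName: "prod"

presets:
  resourceDetection:
    enabled: true
    deploymentEnvironmentName: "prod"
```

A staging or dev cluster would use the same structure with its own value, so each cluster carries exactly one Environment value for the filter to group by.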
Note
When using the `k8s.cluster.name` label, its value is automatically populated into `deploymentEnvironmentName`.
Non-Helm deployments (non-Kubernetes or custom)
For non-Kubernetes or custom installations, define the deployment.environment.name value directly in the Collector configuration:
```yaml
processors:
  resourcedetection:
    detectors: [env, system]
    override: false
  resource:
    attributes:
      - key: deployment.environment.name
        value: "staging" # or ${env:DEPLOY_ENV}
        action: upsert

service:
  pipelines:
    traces:
      processors: [resourcedetection, resource]
    metrics:
      processors: [resourcedetection, resource]
    logs:
      processors: [resourcedetection, resource]
```
How it works
- The `deployment.environment.name` resource attribute is added at the OpenTelemetry Collector or agent layer and automatically included in all signals (traces, metrics, and logs).
- You can set this as a fixed string (for example, `"staging"`) or use an environment variable (for example, `${env:DEPLOY_ENV}`) for flexibility across deployments, as shown in the sketch below.
UI changes with Environment filter
| View | Behavior |
|---|---|
| Main pages (Service Catalog, Database Catalog) | Filterable only by Environment |
| Internal drilldowns | Environment selection persists; additional dynamic filters operate within that Environment |
