Understand and use ownership
Ownership gives you a clear, consistent way to understand who is responsible for every resource in your infrastructure. Each entity carries three attributes: environment, service, and team. These describe where it runs, what it supports, and who operates it. This context helps you interpret resources quickly, whether you’re examining a host, exploring a pod, or checking the state of a cluster.
Infra Explorer resolves ownership from several sources (declarative tags, Kubernetes inheritance, runtime metadata, and explicit UI entries) and presents a single, consolidated view. You see the final resolved values, regardless of how they were defined.
What ownership represents
Ownership is based on three attributes:
| Attribute | Meaning | Examples |
|---|---|---|
| Environment | Where the resource runs | prod, staging, dev |
| Service | The application or component the resource belongs to | checkout, payments |
| Team | The group responsible for operating the resource | platform-team, sre-team |
Resources may have multiple values for an attribute when ownership is defined in more than one place. Infra Explorer shows the combined, final set.
Why ownership matters
Troubleshooting often starts with a simple but difficult question: who owns this?
In distributed environments, that answer is rarely obvious. Without consistent ownership, issues bounce between teams, investigations stall, and impact becomes harder to assess.
Ownership solves this by attaching clear, structured identity to each resource. Environment, service, and team values help you understand a resource’s purpose, blast radius, and accountability before you open a dashboard or trace.
Ownership is especially useful when:
- a resource becomes noisy or unstable and you need to know immediately who operates it
- you’re assessing business impact and want to see which services and environments are involved
- you’re navigating large infrastructures and want to focus on a specific team or application
- you’re reviewing a resource that already has related activity in Cases
Reliable ownership shortens investigations, reduces misrouted escalations, and helps teams work from a shared understanding.
Where ownership comes from
Ownership values come from four mechanisms. Infra Explorer merges these sources internally and displays only the final resolved values.
| Model | Description | Editable |
|---|---|---|
| Declarative (as code) | Defined through Kubernetes labels or cloud tags | Yes, using IaC |
| Inheritance | Passed from parent Kubernetes workloads to descendants | No |
| Discovery | Identified from available runtime metadata | No |
| Explicit (UI-based) | Added manually in the Infra Explorer UI | Yes |
Declarative, inherited, and discovered values are read-only. UI-based values can be added or removed when you need to fill gaps or correct missing tags.
Understanding multiple ownership values
It’s common for a resource to have more than one value for an attribute. This happens when tagging is defined in multiple systems, when different tag keys map to the same attribute, or when Kubernetes inheritance adds values from workloads. UI-based entries can add additional context when tagging is incomplete.
Multiple values help you understand where ownership originates and highlight areas where cleanup or alignment may be needed.
How ownership is assembled
Infra Explorer resolves ownership in a predictable order:
- Read declarative tags from Kubernetes or cloud resources.
- Apply ownership inherited from Kubernetes parents.
- Add ownership discovered at runtime.
- Add UI-based ownership.
- Merge and alphabetize the final values.
Only the combined set appears in the UI, making results clear even if tagging varies across environments.
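The resolution order above can be sketched as a small merge routine. This is an illustrative model only, not Coralogix's actual implementation; the function and field names are hypothetical.

```python
# Illustrative sketch of the ownership-resolution order described above.
# All names here are hypothetical; the real logic is internal to Infra Explorer.

def resolve_ownership(declarative, inherited, discovered, ui_based):
    """Merge ownership values from the four sources into one sorted set per attribute."""
    resolved = {}
    for attribute in ("environment", "service", "team"):
        values = set()
        # Apply each source in order; the result is the union of all values.
        for source in (declarative, inherited, discovered, ui_based):
            values.update(source.get(attribute, []))
        resolved[attribute] = sorted(values)  # merged and alphabetized
    return resolved

# Example: declarative tags and a UI entry each contribute a team value.
result = resolve_ownership(
    declarative={"environment": ["prod"], "service": ["checkout"], "team": ["payments-team"]},
    inherited={},
    discovered={"environment": ["prod"]},
    ui_based={"team": ["sre-team"]},
)
print(result)
# {'environment': ['prod'], 'service': ['checkout'], 'team': ['payments-team', 'sre-team']}
```

Because the sources are merged into a set, duplicate values (such as `prod` from both a tag and runtime discovery) appear only once.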
Note
If a Kubernetes workload does not define a service value through labels, Infra Explorer uses the workload name itself (for example, the Deployment or StatefulSet name) as a fallback service value. This ensures every workload has at least one service identifier, even before tagging is fully standardized.
Before you begin
To use ownership in Infrastructure Explorer, make sure:
- Kubernetes Complete Observability is installed and sending metadata to Coralogix. To set it up, follow Getting Started with Kubernetes Monitoring.
- Infrastructure Explorer / Resource Catalog is enabled in the integration wizard.
- Your clusters are actively sending Kubernetes resource metadata (Deployments, Pods, Services, Namespaces, etc.).
Once these are in place, you can add ownership using Kubernetes labels and (optionally) cloud tags.
Add ownership using Kubernetes labels
Ownership in Kubernetes is derived from standard labels on your resources and resolved into three attributes:
- `environment`
- `service`
- `team`
You can set these labels on workloads (recommended) or directly on Pods.
Supported Kubernetes label keys
You can use any of the following keys; Infra Explorer normalizes them:
Environment keys
`environment`, `env`, `cx_environment`, `cx_env`, `app.kubernetes.io/environment`, `app.kubernetes.io/env`
Service keys
`service`, `cx_service`, `app.kubernetes.io/service`
If none of these service labels are present on a workload, Infra Explorer uses the Kubernetes workload metadata.name (for example, the Deployment or StatefulSet name) as the service value.
Team keys
`team`, `cx_team`
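The key normalization and the workload-name fallback can be modeled like this. The key lists come from this page, but the function itself is a hypothetical sketch, not Infra Explorer's actual code.

```python
# Sketch of how supported label keys map to ownership attributes,
# including the metadata.name fallback for service. Hypothetical code.

ENVIRONMENT_KEYS = {"environment", "env", "cx_environment", "cx_env",
                    "app.kubernetes.io/environment", "app.kubernetes.io/env"}
SERVICE_KEYS = {"service", "cx_service", "app.kubernetes.io/service"}
TEAM_KEYS = {"team", "cx_team"}

def ownership_from_labels(workload_name, labels):
    """Collect ownership values from workload labels; fall back to the workload name for service."""
    ownership = {"environment": set(), "service": set(), "team": set()}
    for key, value in labels.items():
        if key in ENVIRONMENT_KEYS:
            ownership["environment"].add(value)
        elif key in SERVICE_KEYS:
            ownership["service"].add(value)
        elif key in TEAM_KEYS:
            ownership["team"].add(value)
    if not ownership["service"]:
        # No service label present: use the workload's metadata.name as the fallback.
        ownership["service"].add(workload_name)
    return {attr: sorted(vals) for attr, vals in ownership.items()}

print(ownership_from_labels("checkout-api", {"cx_team": "payments-team"}))
# {'environment': [], 'service': ['checkout-api'], 'team': ['payments-team']}
```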
Example: Tag a Deployment with ownership
Apply labels to a Deployment spec. Infra Explorer will automatically propagate ownership to the Pods and containers created from it.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-api
  namespace: payments
  labels:
    app.kubernetes.io/environment: prod
    app.kubernetes.io/service: checkout
    cx_team: payments-team
spec:
  replicas: 3
  selector:
    matchLabels:
      app: checkout-api
  template:
    metadata:
      labels:
        app: checkout-api
    spec:
      containers:
        - name: checkout
          image: ghcr.io/example/checkout:1.0.0
```
Result in Infra Explorer:
- Environment: `prod`
- Service: `checkout`
- Team: `payments-team`
- Pods and containers created by this Deployment inherit these ownership values.
Example: Tag a Pod when you can’t change the Deployment
If you don’t control the Deployment (for example, a third‑party chart), you can still define ownership by applying labels directly to the Pod spec (via Helm values, Kustomize, or a mutating webhook).
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: third-party-worker
  namespace: background-jobs
  labels:
    environment: staging
    service: batch-processor
    team: platform-team
spec:
  containers:
    - name: worker
      image: ghcr.io/example/worker:2.3.1
```
Infra Explorer reads these labels and sets:
- Environment: `staging`
- Service: `batch-processor`
- Team: `platform-team`
Ownership from cloud tags (AWS EC2)
For EC2‑based hosts, Infra Explorer can also derive ownership from EC2 tags:
Environment tags
`CX_ENV_ID`, `cx_env`, `Env`, `environment`
Service tags
`CX_SERVICE_NAME`, `service`
Team tags
`team`, `cx_team`
Example:
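An EC2 instance might carry tags like the following. This tag set is a hypothetical illustration; the keys come from the lists above.

```yaml
Env: prod
CX_SERVICE_NAME: checkout
team: payments-team
```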
Infra Explorer resolves this to:
- Environment: `prod`
- Service: `checkout`
- Team: `payments-team`
You can use cloud tags and Kubernetes labels together; Infra Explorer merges all values.
Handling multiple and conflicting ownership values
It’s normal to see multiple values for an attribute when:
- You define tags in more than one place (for example, a Deployment and the Pods it creates).
- Different keys (for example, `environment` and `app.kubernetes.io/environment`) map to the same attribute.
- A cloud tag assigns one team and a Kubernetes label assigns another.
- UI‑based ownership is added on top of declarative tags.
Infra Explorer keeps all values and shows the merged result without indicating the source of each value. Seeing several values is normal in mixed or evolving tagging strategies, and it helps you spot inconsistencies and cleanup opportunities across teams and environments.
Verify your ownership configuration
After applying labels or tags, you can validate that ownership is working:
- Select Infrastructure, then Infra Explorer.
- Choose a view, for example Kubernetes → Pods or Kubernetes → Deployments.
- Use the Environment, Service, or Team filters in the left sidebar:
  - Filter by `Environment = prod` to see all production resources.
  - Filter by `Team = payments-team` to isolate everything owned by that team.
- Click a resource row to open the side panel and select the Ownership section:
  - Confirm that Environment, Service, and Team match the labels or tags you applied.
- Optionally, use Group by → Service / Environment / Team to visually confirm resources cluster as expected.
If the expected values don’t appear:
- Double‑check that labels are on the workload metadata (not only in the Pod template selectors).
- Make sure your cluster is using the latest Helm values or manifests and the integration has been redeployed.
- Verify that the Kubernetes Complete Observability integration is healthy and sending metadata.
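To check that labels sit on the workload's own metadata (and not only inside the Pod template), you can inspect the object directly. The commands below use the hypothetical `checkout-api` Deployment from the example above.

```shell
# Show labels on the Deployment's own metadata (these drive ownership).
kubectl get deployment checkout-api -n payments \
  -o jsonpath='{.metadata.labels}{"\n"}'

# Compare with the labels inside the Pod template.
kubectl get deployment checkout-api -n payments \
  -o jsonpath='{.spec.template.metadata.labels}{"\n"}'
```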
View or edit ownership
You can view or update ownership on any resource.
- Go to Infrastructure, then Infra Explorer.
- Select a category such as Hosts, Agents, or Kubernetes.
- Choose a resource.
- Open Ownership to view its environment, service, and team.
- Select Edit to add or update UI-based values.
Values coming from declarative, inherited, or discovered sources are not editable.
Ownership in Infra Explorer
Ownership appears throughout Infra Explorer so you can read, filter, and organize resources by service, environment, and team.
Ownership in the table
Environment, Service, and Team are available as fields in the resource table. This lets you see who owns a resource directly alongside its configuration, status, and operational metrics.
For example, when reviewing Pods, you can scan the table to confirm that production Pods belong to the correct team.
Ownership filters in the sidebar
The left sidebar includes filters for Environment, Service, and Team. Each filter supports search and multi-select. Filtering is useful when you want to narrow the dataset without changing the table.
For example, applying Environment = prod gives you a production-only view, while filtering by Team = platform-team isolates everything that team operates.
Grouping results by ownership
The Group by control reorganizes the table into sections based on Environment, Service, or Team. Grouping helps you understand distribution and reveals inconsistencies quickly.
For example, grouping by Service shows how resources cluster by application, while grouping by Environment gives a clear view of how staging and production differ.
