Prometheus Operator: The Basics and a Quick Tutorial
What Is the Prometheus Operator?

The Prometheus Operator is a software extension for Kubernetes that provides a streamlined way to deploy and manage Prometheus monitoring instances within a Kubernetes cluster. It is designed to make running Prometheus simpler and more efficient by leveraging Kubernetes’ native management features. 

The operator automates the monitoring setup, offering a higher level of integration and scalability for Kubernetes-based applications. Prometheus, an open-source monitoring and alerting toolkit, is widely used for its powerful querying language and ability to handle multi-dimensional data collection and querying. 

Integrating Prometheus with Kubernetes manually can be complex and time-consuming. The Prometheus Operator addresses these challenges by encapsulating the operational knowledge required to run Prometheus in Kubernetes. It does this through the introduction of custom resources for managing Prometheus, Alertmanager, and related components.

Prometheus Operator Features 

The Prometheus Operator offers the following features.

Kubernetes Custom Resources

The operator introduces several custom resources specific to Prometheus. These custom resources allow for defining and managing Prometheus components within a Kubernetes cluster.

Custom resources facilitate a more Kubernetes-native approach to configuring Prometheus instances. They bridge the gap between Prometheus configuration and Kubernetes resource management, enabling seamless updates and management of monitoring infrastructure.

Simplified Deployment Configuration

By utilizing custom resources, users can deploy and configure monitoring solutions without extensive Prometheus knowledge. This abstraction lowers the barrier to entry for using Prometheus in Kubernetes environments.

The simplified process enables quick adjustments and updates to the deployment, enhancing operational efficiency. It allows for focusing on monitoring metrics rather than managing deployment complexities.

Prometheus Target Configuration

Prometheus target configuration through the operator allows automatic discovery of services and endpoints to monitor. This dynamic target management adapts as the Kubernetes cluster changes, reducing manual configuration efforts.

The operator’s approach to configuring targets ensures that Prometheus always has an up-to-date view of the cluster’s state. This makes it easier for teams to maintain accurate and reliable monitoring data across services.

Understanding Prometheus Operator Custom Resource Definitions 

Custom Resource Definitions (CRDs) allow for the extension of the Kubernetes API to support custom objects. The Prometheus Operator leverages CRDs to introduce several custom resources that are specific to managing Prometheus and its components. These CRDs are essential for bridging the Kubernetes API with Prometheus’ configuration needs, providing a native Kubernetes experience for managing the monitoring stack.

The primary CRDs introduced by the Prometheus Operator include:

  • Prometheus: Represents a Prometheus instance. It allows the definition of the desired state of a Prometheus server within the cluster, including its version, configuration, storage, and alerting settings.
  • ServiceMonitor: Automates the discovery of service endpoints for monitoring. This CRD specifies how groups of services should be monitored by a Prometheus instance. The operator uses this information to dynamically configure Prometheus targets based on the current state of the services running in Kubernetes.
  • Alertmanager: Defines an Alertmanager instance and its configuration. Alertmanager handles alerts sent by Prometheus servers and is responsible for routing them to the correct receiver integration such as email, PagerDuty, or OpsGenie.
  • PrometheusRule: Specifies sets of alerting and recording rules to be evaluated by Prometheus instances. This CRD makes it easy to define and manage rules for generating alerts or aggregating data within the Kubernetes ecosystem.
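To make the last of these concrete, here is a minimal sketch of a PrometheusRule manifest that fires an alert when a scrape target has been down for five minutes. The resource name, label, and group name are illustrative:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-alert-rules
  labels:
    team: frontend
spec:
  groups:
  - name: example.rules
    rules:
    # Fire when the "up" metric for a target has been 0 for 5 minutes
    - alert: InstanceDown
      expr: up == 0
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: "Instance {{ $labels.instance }} is down"
```

A Prometheus instance selects PrometheusRule objects via its ruleSelector field, analogous to how it selects ServiceMonitors.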

Tutorial: Getting Started with the Prometheus Operator 

Here’s a walkthrough of how to set up and start using the Prometheus Operator. The code in this tutorial is adapted from the Prometheus Operator documentation.

Install the Operator

To install the Prometheus Operator, start by deploying its Custom Resource Definitions (CRDs) along with the operator itself, ensuring it has the necessary RBAC (Role-Based Access Control) resources for operation. This is done by executing a series of commands in your terminal.

First, determine the latest version of the Prometheus Operator by fetching its tag name from the GitHub repository. Then, use this tag to download and apply the operator’s deployment bundle using kubectl. The process looks like this:

LATEST=$(curl -s https://api.github.com/repos/prometheus-operator/prometheus-operator/releases/latest | jq -cr .tag_name)
curl -sL https://github.com/prometheus-operator/prometheus-operator/releases/download/${LATEST}/bundle.yaml | kubectl create -f -

Note: You may have to install jq first. On Ubuntu/Debian, run apt install jq

After initiating the deployment, it might take a few minutes for the Prometheus Operator to be fully operational. You can verify the operator is ready by using the kubectl wait command, checking for the Ready condition of the pods associated with the prometheus-operator in the default namespace:

kubectl wait --for=condition=Ready pods -l app.kubernetes.io/name=prometheus-operator -n default

The successful execution of these commands confirms the Prometheus Operator is installed and ready to manage Prometheus and Alertmanager clusters within your Kubernetes environment.

Deploy Prometheus

To deploy Prometheus in a Kubernetes cluster, especially if RBAC authorization is enabled, it’s necessary to first establish the appropriate permissions for the Prometheus service account. This involves creating a service account, a ClusterRole with necessary permissions, and a ClusterRoleBinding to grant those permissions to the service account. The required manifests are as follows:

ServiceAccount:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus

ClusterRole:

Define a ClusterRole that specifies the permissions Prometheus needs to access resources like nodes, services, endpoints, and pods across the cluster:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/metrics
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]

ClusterRoleBinding:

Link the service account to the defined ClusterRole object, allowing Prometheus to use the specified permissions:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: default

Apply these resources to your cluster to ensure Prometheus has the necessary permissions.

The next step is to deploy Prometheus itself. This involves defining a Prometheus custom resource that specifies how Prometheus should be deployed, which ServiceMonitors it should discover, and other configuration details such as resource requests.

Prometheus Custom Resource:

To deploy Prometheus, create a Prometheus custom resource with the following specification. This resource outlines the desired state of the Prometheus instance, including the service account name, resource requests, and which ServiceMonitors to include for monitoring:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      team: frontend
  resources:
    requests:
      memory: 400Mi
  enableAdminAPI: false

This Prometheus instance is configured to automatically discover and use ServiceMonitors that have the team: frontend label, allowing the frontend team to manage their own ServiceMonitors and services. Deploy this resource to your cluster to initiate the Prometheus setup.

Note: With this setup, any ServiceMonitor resource carrying the label team: frontend will be automatically selected for metrics scraping. Note that the selector matches ServiceMonitor objects, not pods: to bring a new workload under monitoring, create a ServiceMonitor for it with this label, or add the label to an existing one: kubectl label servicemonitor my-service-monitor team=frontend
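For completeness, here is a sketch of a ServiceMonitor that this Prometheus instance would select. The name, selector, and port are hypothetical and assume a Service with the label app: demo-app exposing a port named web:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: demo-app
  labels:
    team: frontend   # matched by the serviceMonitorSelector above
spec:
  selector:
    matchLabels:
      app: demo-app  # matches the target Service's labels
  endpoints:
  - port: web        # named port on the Service to scrape
```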

Use a PodMonitor

In addition to ServiceMonitors, Prometheus can also use PodMonitors for scraping metrics directly from pods, bypassing the need for a Service object. This is particularly useful for scenarios where pods are dynamically created and might not always be covered by a stable service.

PodMonitor Resource:

To monitor an application using a PodMonitor, define a PodMonitor resource similar to the following. This tells Prometheus to scrape metrics from pods that match our sample application’s label:

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: demo-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: demo-app
  podMetricsEndpoints:
  - port: web

The Prometheus custom resource needs to be configured to select PodMonitors based on specific labels. If you are using PodMonitors alongside ServiceMonitors, ensure that your Prometheus configuration includes both selectors.
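As a sketch of what this looks like, the Prometheus resource from earlier can be extended with a podMonitorSelector alongside the serviceMonitorSelector; the labels here mirror the earlier examples and are illustrative:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      team: frontend
  podMonitorSelector:      # also select PodMonitors with this label
    matchLabels:
      team: frontend
  resources:
    requests:
      memory: 400Mi
  enableAdminAPI: false
```

Note that an empty selector (e.g. podMonitorSelector: {}) selects all PodMonitors, while omitting the field entirely selects none.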

Expose the Prometheus Service

To access the Prometheus UI and verify that your metrics are being collected, you need to expose the Prometheus service. A simple way to do this for testing purposes is to use a NodePort service.

NodePort Service for Prometheus:

Create a service with the type NodePort to expose the Prometheus server on a specific port accessible from outside the cluster:

apiVersion: v1
kind: Service
metadata:
  name: prometheus
spec:
  type: NodePort
  ports:
  - name: web
    nodePort: 30900
    port: 9090
    protocol: TCP
    targetPort: web
  selector:
    prometheus: prometheus

Deploy this service, and you can access the Prometheus web UI by navigating to any of your cluster node’s IP addresses on port 30900. There, you should be able to see the targets Prometheus is scraping, including the instances of your example application.

Expose the Prometheus Admin API

For advanced use cases, you may need to enable and expose the Prometheus Admin API, which provides capabilities for administrative tasks such as deleting series, cleaning up tombstones, and taking snapshots.

To enable the Admin API, set the enableAdminAPI flag to true in your Prometheus custom resource definition. Be aware that enabling this API exposes sensitive operations, so it should be done with caution and ideally protected by additional authentication mechanisms:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      team: frontend
  resources:
    requests:
      memory: 400Mi
  enableAdminAPI: true

Deploying this configuration change will make the Admin API available. However, given its potential impact, ensure you have considered security implications and access controls to prevent unauthorized use.

Managed Cloud Native Monitoring with Coralogix

Coralogix sets itself apart in observability with its modern architecture, enabling real-time insights into logs, metrics, and traces with built-in cost optimization. Coralogix’s straightforward pricing covers all its platform offerings including APM, RUM, SIEM, infrastructure monitoring and much more. With unparalleled support that features less than 1 minute response times and 1 hour resolution times, Coralogix is a leading choice for thousands of organizations across the globe.

Learn about the Coralogix Prometheus integration
