5 Strategies for Mitigating Kubernetes Security Risks
As the container orchestration platform of choice for many enterprises, Kubernetes (or K8s, as it’s often written) is an obvious target for cybercriminals. In its early days, the sheer complexity of managing your own Kubernetes deployment meant it was easy to miss security flaws and introduce loopholes.
Now that the platform has matured, managed Kubernetes services are available from all major cloud vendors, and Kubernetes security best practices have been developed and defined. While no security measure will provide absolute protection from attack, applying these techniques consistently and correctly will certainly decrease the likelihood of your containerized deployment being hacked.
The recommended approach to securing a Kubernetes deployment uses a layered strategy, modeled on the defense in depth (DiD) paradigm. In the context of information technology, defense in depth is a security pattern that uses multiple layers of redundancy to protect a system from attack.
Rather than relying on a single security perimeter to protect against all attacks, a defense in depth approach acknowledges the risk that defenses may be breached and deploys additional protections at intermediate and lower levels of the architecture. That way, if one line of defense is breached, there are additional obstacles in place to impede an attacker’s progress.
So how does this apply to Kubernetes? Kubernetes is deployed to a computing cluster that is made up of multiple worker nodes together with nodes hosting the control plane components (including the API server and database).
Each worker node is simply a machine hosting one or more pods, together with the K8s agent (kubelet), network proxy (kube-proxy), and container runtime. Each pod hosts a container that runs some of your application code. Finally, as a cloud-native platform, K8s is typically deployed to cloud-hosted infrastructure, which means you can easily increase the number of nodes in the cluster to meet demands.
You can think of a Kubernetes deployment in terms of these four layers – your code, the containers the code runs in, the cluster used to deploy the containers, and the cloud (or on-premise) infrastructure hosting the cluster – the four Cs of cloud-native security. Applying Kubernetes security best practices at each of these levels helps to create defense in depth.
Kubernetes makes it easier to deploy application code using containers and enables you to leverage the benefits of cloud infrastructure for hosting those containers. The code you run in your containers is both an obvious attack vector and the layer over which you have the most control.
When securing your code, building security considerations into your software development process early on – also known as “shifting security to the left” – is more efficient than waiting until the functionality has been developed before checking for security flaws.
One example is scanning your code changes regularly (either as you write or as an early step in the CI/CD pipeline) with static code analyzers and software composition analysis tools. These help to catch known exploits in your chosen framework and third-party dependencies, which could otherwise leave your application vulnerable to attack.
When developing new features for a containerized application, you also need to consider how your containers will communicate with each other. This includes ensuring communications between containers are encrypted and limiting exposed ports. Taking a zero-trust approach here helps protect your application and your data; if an attacker finds a way in, at least they won’t immediately gain unfettered access to your entire system.
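One way to enforce this zero-trust stance at the cluster level is with a Kubernetes NetworkPolicy, which denies traffic by default once a pod is selected and only admits the connections you explicitly allow. The sketch below assumes hypothetical `app: api` and `app: frontend` labels and port 8080; adapt the selectors and ports to your own workloads.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend     # illustrative name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api                 # the policy applies to pods with this label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend    # only frontend pods may connect...
      ports:
        - protocol: TCP
          port: 8080           # ...and only on this port
```

Note that NetworkPolicy objects are only enforced if your cluster's network plugin supports them, so verify this before relying on them in production.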
When Kubernetes deploys an instance of a new container, it first has to fetch the container image from a container registry. This can be the Docker public registry, another specified public registry, or a private container registry. Unfortunately, public container registries have become a popular attack vector.
This is because open-source container images provide a convenient way to evade an organization’s security perimeter and deploy malicious code directly onto a cluster, such as crypto-mining operations and bot farms. Scanning container images for known vulnerabilities and establishing a secure chain of trust for the images you deploy to your cluster is essential.
When building containers, applying the principle of least privilege will help to prevent malicious actors that have managed to gain access to your cluster from accessing sensitive data or modifying the configuration to suit their own ends.
As a minimum, configure the container to use a user with minimal privileges (rather than root access) and disable privilege escalation. If some root permissions are required, grant those specific capabilities rather than all. With Kubernetes, these settings can be configured for containers or pods using the security context. This makes it easier to apply security settings consistently across all pods and containers in your cluster.
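The settings above can be sketched as a pod manifest using the `securityContext` field. The image name, UID, and the `NET_BIND_SERVICE` capability here are illustrative assumptions; grant only whatever specific capabilities your workload actually needs.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app            # illustrative name
spec:
  securityContext:             # pod-level defaults for all containers
    runAsNonRoot: true         # refuse to start containers that run as root
    runAsUser: 10001           # an arbitrary unprivileged UID
  containers:
    - name: app
      image: example/app:1.0   # placeholder image
      securityContext:         # container-level settings override pod-level ones
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]                  # start from zero privileges
          add: ["NET_BIND_SERVICE"]      # re-add only what is needed
```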
You may also want to consider setting resource limits to restrict the number of pods or services that can be created, and the amount of CPU, memory, and disk space that can be consumed, according to your application’s needs. This reduces the scope for misuse of your infrastructure and mitigates the impact of denial-of-service attacks.
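At the namespace level, these limits can be expressed with a ResourceQuota object. The namespace name and the figures below are illustrative; size them to your application's actual needs.

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota             # illustrative name
  namespace: team-a            # hypothetical namespace
spec:
  hard:
    pods: "20"                 # cap the number of pods in the namespace
    requests.cpu: "8"          # total CPU that pods may request
    requests.memory: 16Gi      # total memory that pods may request
    limits.cpu: "16"           # hard ceiling on CPU consumption
    limits.memory: 32Gi        # hard ceiling on memory consumption
    requests.storage: 100Gi    # total persistent storage that may be claimed
```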
A Kubernetes cluster is made up of the control plane and data plane elements. The control plane is responsible for coordinating the cluster, whereas the data plane consists of the worker nodes hosting the pods, K8s agent (kubelet), and other elements required for the containers to run.
On the control plane side, both the Kubernetes API and the key-value store (etcd) require specific attention. All communications – from end-users, cluster elements, and external resources – are routed through the K8s API. Ideally, all calls to the API, from inside and outside the cluster, should be encrypted with TLS, authenticated, and authorized before being allowed through.
When you set up the cluster, you should specify the authentication mechanisms to be used for human users and service accounts. Once authenticated, requests should be authorized using the built-in role-based access control (RBAC) component.
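With RBAC, you define roles that grant specific verbs on specific resources, then bind them to users or service accounts. A minimal sketch, assuming a hypothetical `monitoring-agent` service account that only needs to read pods:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader             # illustrative name
  namespace: default
rules:
  - apiGroups: [""]            # "" refers to the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # read-only access, no write verbs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods              # illustrative name
  namespace: default
subjects:
  - kind: ServiceAccount
    name: monitoring-agent     # hypothetical service account
    namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Following the principle of least privilege, prefer namespaced Roles over cluster-wide ClusterRoles unless a subject genuinely needs access across the whole cluster.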
Kubernetes requires a key-value store for all cluster data. Access to the data store effectively grants access to the whole cluster, as you can view and (if you have write access) modify the configuration details, pod settings, and running workloads.
It’s therefore essential to restrict access to the database and secure your database backups. Support for encrypting secret data at rest was promoted from beta in late 2020 and should also be enabled where possible.
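Encryption at rest is configured by passing an EncryptionConfiguration file to the API server via its `--encryption-provider-config` flag. A minimal sketch (the key name and the base64 placeholder are illustrative; generate your own random 32-byte key and protect the file accordingly):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets                # encrypt Secret objects at rest in etcd
    providers:
      - aescbc:                # first provider listed is used for writes
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder, do not commit real keys
      - identity: {}           # fallback so pre-existing unencrypted data can still be read
```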
Within the data plane, it’s good practice to restrict access to the Kubelet API, which is used to control each worker node and the containers it hosts. By default, anonymous access is permitted, so this must be disabled for production deployments at the very least.
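Anonymous access can be disabled through the kubelet's configuration file, typically alongside webhook authentication and authorization so that requests are validated against the API server. A sketch of the relevant fragment:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false       # reject unauthenticated requests to the kubelet API
  webhook:
    enabled: true        # authenticate bearer tokens via the API server
authorization:
  mode: Webhook          # authorize requests via SubjectAccessReview
```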
For particularly sensitive workloads, you may also want to consider a sandboxed or virtualized container runtime for increased security. These reduce the attack surface, but at the cost of reduced performance compared to mainstream runtimes such as Docker or CRI-O.
You can learn more about securing your cluster from the Kubernetes documentation.
K8s is cloud-native, but it’s possible to run it on-premises too. When using a managed K8s service, such as Amazon EKS or Azure Kubernetes Service (AKS), your cloud provider will handle the physical security of your infrastructure and many aspects of the cybersecurity too.
If you’re running your own Kubernetes deployment, either in the cloud or hosted on-premise, you need to ensure you’re applying infrastructure security best practices. For cloud-hosted deployments, follow your cloud provider’s guidance on security and implement user account protocols to avoid unused accounts remaining active, restrict permissions, and require multi-factor authentication.
For on-premise infrastructure, you’ll also need to keep servers patched and up-to-date, maintain a firewall and implement other network security measures, potentially use IP allow lists or block lists to limit access, and ensure physical security.
As a container orchestration platform, Kubernetes is both powerful and flexible. While this allows organizations to customize it to their needs, it also places the burden of security on IT admins and SecOps staff. A good understanding of Kubernetes security best practices – including how security can be built in at every level of a K8s deployment – and of the specific needs of your organization and application is essential.
Cybersecurity is not a fire-and-forget exercise. Once you have architected and deployed your cluster with security in mind, the next phase is to ensure your defenses are working as expected. Building observability into your Kubernetes deployment will help you to develop and maintain a good understanding of how your system is operating and monitor running workloads.