Since Google first introduced Kubernetes, it’s become one of the most popular DevOps platforms on the market.
Unfortunately, increasingly widespread usage has made Kubernetes a growing target for hackers. To illustrate the scale of the problem, a StackRox report found that over 90% of respondents had experienced some form of security incident in 2020. These incidents were due primarily to poorly implemented Kubernetes security.
This is such a serious problem it is even slowing the pace of innovation. Businesses are struggling to find people with the right Kubernetes skills to tackle security issues.
The way we see it, making Kubernetes secure is part of a wider conversation around integrating cybersecurity into DevOps practice. We’ve previously talked about how organizations are embracing DevSecOps as a way of baking security into DevOps.
Kubernetes security is really about taking those insights and applying them to Kubernetes systems.
Many IT systems enhance security by giving different access rights to different levels of users. Kubernetes is no exception. RBAC (Role-Based Access Control) authorization allows you to control who can access your Kubernetes cluster and what they are allowed to do there. This reduces the possibility of an unauthorized third party stealing sensitive information.
RBAC is enabled by starting the API server with RBAC included in its --authorization-mode flag. For example:
kube-apiserver --authorization-mode=Example,RBAC --other-options --more-options
The Kubernetes API allows you to specify the access rights to a cluster using four special-purpose objects: Role, ClusterRole, RoleBinding, and ClusterRoleBinding.
Role and ClusterRole define access permissions through sets of rules. The ClusterRole object can define these rules across your whole Kubernetes cluster. A typical ClusterRole might look like this.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: secret-reader
rules:
- apiGroups: [""]
  # at the HTTP level, the name of the resource for accessing Secret
  # objects is "secrets"
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
ClusterRoles are useful for granting access to nodes, the basic computational units of clusters. They are also useful when you want to specify permissions for resources such as pods without specifying a namespace.
In contrast, the Role object is scoped to particular namespaces, virtual clusters that are contained within your cluster. An example Role might look something like this.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-namespace
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
Roles are useful when you want to define permissions for particular namespaces.
RoleBinding and ClusterRoleBinding take the permissions defined in a Role or ClusterRole and grant them to particular groups of users. In these objects, the users, called subjects, are linked to the role through a reference called roleRef, similar to how your contacts can be grouped into “work” or “home”.
ClusterRoleBinding grants the permissions of a specific role to a group of users across an entire cluster. To enhance security, the roleRef is immutable. Once a ClusterRoleBinding has granted a group of users a particular role, that role can’t be swapped out for a different role without creating a new ClusterRoleBinding.
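For instance, a RoleBinding that grants the pod-reader Role to a single user in my-namespace might look like this (the user name jane is a hypothetical example):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: my-namespace
subjects:
# the subject being granted access; "jane" is a hypothetical user
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  # immutable reference to the Role that holds the permissions
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```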
Kubernetes also lets you specify permissions for pods, the smallest deployable units that run your application’s containers. Through the use of Kubernetes security contexts, you can define access privileges with policies.
Policies come in three flavors. Privileged is the most permissive policy; it’s aimed at trusted, system-level workloads managed by admins. After this comes Baseline, a minimally restrictive policy appropriate for trusted users who aren’t admins.
Restricted is the most restrictive policy. With security features such as requiring containers to run as a non-root user, it’s designed for maximum pod hardening. Restricted should be used for applications where Kubernetes security is critical.
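As a sketch, in recent Kubernetes versions the built-in Pod Security admission controller can enforce one of these policies by labeling a namespace (the namespace name here is a hypothetical example):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: hardened-apps
  labels:
    # enforce the Restricted policy for every pod created in this namespace
    pod-security.kubernetes.io/enforce: restricted
```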
Malicious actors often betray themselves through their effects on the systems they’re trying to penetrate. Looking out for anomalous changes in web traffic or CPU usage can alert you to a security breach in time for you to stop it before it does any real damage.
To track these kinds of metrics successfully, you need robust monitoring and logging. That’s where Coralogix comes in handy. You can use Fluentd to integrate Coralogix logging into your Kubernetes cluster.
This lets you leverage the power of machine learning, which detects patterns in large datasets, to extract insights and trends from your logs. Having learned what normal looks like, Coralogix can flag future behavior that diverges from that norm.
In the context of Kubernetes security, this predictive capability can allow you to spot a potential data breach before it happens. The benefits this brings to cybersecurity can’t be overstated.
Additionally, the Coralogix Kubernetes Operator enables you to configure Coralogix so that it does just what you need for Kubernetes security.
In a previous post, we talked about the changing landscape of network security. In the early 2000s, most websites used three-tier architectures, which were vulnerable to attack. The advent of containerized platforms like Kubernetes has improved security but requires novel solutions for scaling applications in a security-friendly way. Luckily, we’ve got service meshes to help with this.
A service mesh works to decouple security concerns from the particular application you happen to be running. Instead, security is handed off to the infrastructure layer through the use of a sidecar. One capability a service mesh has is encrypting service-to-service traffic in a cluster, typically with mutual TLS (mTLS). This prevents hackers from intercepting traffic, lowering the risk of data breaches.
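For example, in Istio, one popular service mesh, mesh-wide mutual TLS can be required with a PeerAuthentication resource. This is a sketch, assuming Istio is installed in the default istio-system root namespace:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  # applying the policy in the root namespace makes it mesh-wide
  namespace: istio-system
spec:
  mtls:
    mode: STRICT  # reject any plain-text traffic between sidecars
```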
In Kubernetes, service meshes typically integrate through the Service Mesh Interface (SMI). This is a standard interface that provides features for the most common use cases, including security.
Service meshes can also help with observability. Observability, in this case, involves seeing how traffic flows between services. We’ve previously covered service meshes in the context of observability and monitoring more in-depth.
Due to the popularity of cloud-based solutions, many organizations are opting for cloud-native Kubernetes. Cloud-native security splits into four layers. Going from the bottom up, these are cloud, cluster, container, and code.
We’ve already talked about cluster and container security earlier in this article, so let’s discuss cloud and code.
Cloud security is contingent on the security of whichever cloud provider you happen to be using. The Kubernetes documentation recommends reading your provider’s security documentation to understand the protections it offers.
Code security, by contrast, is an area where you can take a lot of initiative. A running Kubernetes application is a primary attack surface for potential hackers to exploit. Because your development team writes the application code, there are plenty of opportunities to implement good security features.
For example, if you’re using third-party libraries, you should scan them for potential security vulnerabilities to avoid being caught off guard. It’s also good to make sure your application has as few ports exposed as possible. This limits the effective attack surface of your system, making it harder for malicious actors to penetrate.
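To illustrate the principle of minimal port exposure, a pod spec can declare only the single port the application actually serves on. This is a minimal sketch; all names and the image tag are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app  # hypothetical application name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: example-app:1.0.0  # hypothetical image
        ports:
        - containerPort: 8443  # expose only the one TLS port the app needs
```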
Kubernetes is founded on the concept of containerization. Systems like Docker package your application into containers, which perform the role of a traditional server, but without any complex setup and configuration.
When containerization isn’t done properly, Kubernetes security can be seriously compromised. Let’s look at Docker for example. Docker applications are made of layers, so they are constructed a bit like a pastry. The innermost layer, the base image, typically provides the operating system and basic language support, while successive layers, or images, add functionality.
Because each Docker image is maintained on Docker Hub under the control of its repository owner, there is nothing to stop the inner layer from changing without warning. In the worst case, a Docker image can be intentionally modified by a hacker trying to cause a Kubernetes security breach.
The problem of base images changing can be solved by changing how they are tagged. A Docker image reference normally defaults to the latest tag, which points at the most recent build pushed to Docker Hub. It’s possible to swap out latest for a version-specific tag like node:14.5.0. This pins the inner layers so they can’t change underneath you, making your application’s builds predictable and reproducible.
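As a minimal sketch, pinning looks like this in a Dockerfile (the application details are hypothetical):

```dockerfile
# Pin the base image to an exact version instead of the mutable "latest" tag
FROM node:14.5.0

WORKDIR /app

# Install dependencies before copying the rest of the source,
# so this layer is cached when only application code changes
COPY package*.json ./
RUN npm ci

COPY . .
CMD ["node", "server.js"]
```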
There are a couple of ways you can mitigate the risks of image hacking. First, you can use official images and clone them to your private repository. Second, you can utilize vulnerability scanning tools to vet Docker images for security flaws. Docker has its own vulnerability scanner, but it’s only available if you’re on a Pro or Team plan.
Alternatively, there are third-party tools such as Clair, which checks images layer by layer against databases of known vulnerabilities.
Google named their container orchestration platform Kubernetes, Greek for steersman or helmsman. To developers up to their necks in the vagaries of server management, Kubernetes can act like a lighthouse, guiding them smoothly through the high seas of CI/CD.
Kubernetes and containerization are fast becoming the most popular way to deploy and scale applications. But that popularity brings increasing security risk, particularly when DevOps teams aren’t following best practices.