
Kubernetes Logging: Examples and Best Practices


What Is Kubernetes Logging?

Kubernetes logging enables the collection, storage, and analysis of logs generated by the applications running within Kubernetes pods, as well as by the Kubernetes system components themselves. It’s critical for maintaining the reliability, security, and performance of applications in Kubernetes. Kubernetes logs are important for debugging, monitoring application performance, and ensuring security compliance.

Efficient logging in Kubernetes involves aggregating logs from various sources within the cluster, including application logs, system event logs, and logs from Kubernetes components and services. These logs can be used to gain insights into application behavior and system health, and to track interactions and movements within the cluster.

Proper logging practices help in proactive problem resolution and can support informed decisions regarding scaling and resource allocation.

This is part of a series of articles about Kubernetes monitoring.


The Importance of Logging in Kubernetes 

Here are some of the reasons it’s important to properly manage logs in a Kubernetes environment:

  1. Troubleshooting and debugging: Enables quick identification and resolution of issues within applications and the Kubernetes infrastructure.
  2. Performance monitoring: Provides insights into application performance and resource utilization, helping in optimization efforts.
  3. Security and compliance: Tracks access and changes to the Kubernetes environment, assisting in security audits and regulatory compliance.
  4. Operational intelligence: Builds an understanding of the behavior of applications and the system as a whole, aiding in strategic planning and operational improvements.

What Should You Log in Kubernetes? 

Let’s look at the types of logs that can be recorded in Kubernetes.

Application Logs

Application logs record events within your applications. They’re critical for understanding application behavior and troubleshooting. Logging application-level events, errors, and transactions provides insight into the runtime state of applications.

Kubernetes Events

Kubernetes events log system-level occurrences, like pod creation and node failures. They are useful for real-time monitoring and alerting, offering a granular view of the cluster’s state. Event logs assist in proactive issue identification and capacity planning. They also serve as an audit trail for changes and incidents within the cluster.
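
For example, you can inspect recent events directly with kubectl; the field selector in the second command narrows the output to warnings, which usually indicate problems:

# List events across all namespaces, ordered by creation time
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp

# Show only warning-type events
kubectl get events --field-selector type=Warning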

Kubernetes Cluster Components

Logs from Kubernetes cluster components like kube-apiserver, kube-scheduler, and kubelet offer visibility into the orchestration layer. They track decisions and actions taken by these components. Understanding these logs is key to diagnosing issues with pod scheduling, API requests, and node communication. They help in fine-tuning cluster operations and ensuring resource efficiency.
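
How you access these logs depends on how the cluster is deployed. On clusters where the control plane runs as static pods (managed services typically hide these), a sketch like the following works; <node-name> is a placeholder for an actual node name:

# Control-plane components typically run as pods in the kube-system namespace
kubectl logs -n kube-system kube-apiserver-<node-name>
kubectl logs -n kube-system kube-scheduler-<node-name>

# The kubelet runs as a node service, so read its logs from the node's journal
journalctl -u kubelet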

Kubernetes Ingress

Ingress logs are useful for monitoring the traffic entering a Kubernetes cluster. They track requests routed to services, providing insights into application accessibility and performance. Anomalies in ingress logs can indicate security threats, such as unauthorized access attempts.
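
For example, with the community NGINX Ingress Controller (the namespace and deployment name vary by installation), you can stream access logs directly from the controller:

# Assumes the common ingress-nginx installation layout
kubectl logs -n ingress-nginx deploy/ingress-nginx-controller -f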

Kubernetes Audit Logs

Audit logs record a chronological sequence of API calls, capturing who interacted with the cluster and what actions were performed. They’re important for security auditing and policy enforcement. Reviewing audit logs helps in detecting unauthorized access and ensuring compliance with security protocols, supporting a secure and accountable cluster management process.
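
Audit logging is configured on the kube-apiserver through an audit policy file (passed via the --audit-policy-file and --audit-log-path flags; managed Kubernetes services expose this differently). Here is a minimal sketch of a policy; rules are evaluated top to bottom and the first match wins:

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Record requests touching secrets at the Metadata level, so payloads are never stored
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
# Record all writes with full request and response bodies
- level: RequestResponse
  verbs: ["create", "update", "patch", "delete"]
# Ignore everything else (read-only traffic)
- level: None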

How Does Logging in Kubernetes Work?

Let’s look at how logs are collected in Kubernetes.

Basic Logging with Stdout and Stderr

In Kubernetes, the most basic form of logging is capturing the standard output (stdout) and standard error (stderr) streams from containers. This method requires no additional configuration within the container or the application. The container runtime captures these streams and writes them to log files on the node, and operators can access the logs through the Kubernetes API or the kubectl command-line tool.
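
For example, assuming a pod named myapp, the standard kubectl log commands look like this:

# Stream logs as they are written
kubectl logs -f myapp

# Show only the last 100 lines
kubectl logs --tail=100 myapp

# Show logs from the previous, crashed instance of a container
kubectl logs --previous myapp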

Logging through stdout and stderr is particularly useful for debugging purposes and for applications that already log to these streams. However, this method offers little in the way of log management and analysis. Logs live only on the node that ran the container, are lost when the pod is deleted, and beyond the kubelet’s basic file rotation there is no aggregation or retention, making it less suitable on its own for production environments.

Using an Application-Level Logging Configuration

To overcome the limitations of basic logging, applications running in Kubernetes can be configured to generate logs in a more structured format, such as JSON, and write them to files within the container. This approach allows for more detailed logging, including the ability to specify log levels (error, warning, info) and to include structured data.
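
As an illustration, a structured log entry might look like the following; the exact field names depend entirely on the logging framework in use:

{"timestamp": "2024-05-01T12:34:56Z", "level": "error", "service": "checkout", "message": "payment gateway timed out", "trace_id": "abc123"}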

Application-level logging configurations often involve setting up the application to write logs to a volume that can be accessed by logging agents. These agents can then forward the logs to centralized logging solutions for aggregation, analysis, and long-term storage. 

Using a Logging Agent

Logging agents are deployed within the Kubernetes cluster to collect, process, and forward logs from various sources, including container stdout/stderr streams and application log files. These agents enable the integration of Kubernetes logs with external logging services and platforms.

Common logging agents used in Kubernetes environments include Fluentd, Logstash, and Filebeat. These agents can be deployed as standalone pods, as part of a DaemonSet running on each node, or alongside applications using sidecar containers. They are configured to collect logs, apply filtering and transformation rules, and then forward the logs to centralized logging solutions like Elasticsearch, Splunk, or a cloud provider’s logging service.
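
As a sketch of the DaemonSet pattern, the manifest below runs Fluent Bit on every node and mounts the node’s log directory read-only. A real deployment would also need a ConfigMap with input/output configuration and RBAC permissions for metadata enrichment:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: logging
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      # Container log files live under /var/log/pods and /var/log/containers on each node
      - name: varlog
        hostPath:
          path: /var/log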

Using a logging agent offers several benefits, including the ability to aggregate logs from multiple sources, enrich logs with additional metadata, and implement complex processing and routing rules. 

Two Kubernetes Logging Examples

Using stdout and stderr

Consider the following pod manifest, which runs a container that continuously logs the current date to stdout:

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: busybox
    args: [/bin/sh, -c, 'while true; do echo $(date); sleep 1; done']

After deploying this pod, you can run kubectl logs myapp to see the resulting logs. This command interfaces with the kubelet on the node running the pod, fetching the logs that have been automatically captured and stored.

Using sidecar containers

For applications that do not inherently output logs to stdout and stderr, a sidecar container can be deployed alongside the main application container. This sidecar container is responsible for picking up application logs from a designated file and streaming them to stdout and stderr. 

This pattern allows for log manipulation, such as aggregating multiple log streams into a single stream or separating a single application log stream into multiple logical streams, each handled by a dedicated sidecar container.

Here’s an example of a pod configuration that exposes container logs through a sidecar container. The main container writes its logs to a shared volume, and the sidecar streams the log file to its own stdout:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-logging-sidecar
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
    # Mounting the shared volume over /var/log/nginx makes nginx write
    # real log files instead of the image's default stdout symlinks
    volumeMounts:
    - name: log-volume
      mountPath: /var/log/nginx
  - name: sidecar-container
    image: busybox
    # Stream the access log to the sidecar's stdout so the cluster captures it
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/nginx/access.log']
    volumeMounts:
    - name: log-volume
      mountPath: /var/log/nginx
  volumes:
  - name: log-volume
    emptyDir: {}

With this setup, nginx writes its access log to the shared volume, and the sidecar streams it to stdout, where it is captured like any other container log. You can read it with kubectl logs nginx-with-logging-sidecar -c sidecar-container, and a node-level logging agent will pick it up automatically.

Best Practices for Kubernetes Logging 

Here are some best practices for ensuring a comprehensive logging strategy in Kubernetes.

Establish a Log Retention Policy

A well-defined log retention policy balances the need for historical data against the cost of log storage. Determine the retention period based on regulatory requirements, operational needs, and the criticality of the logs.

For less critical logs, consider shorter retention periods to conserve storage resources. Automate the purging of old logs to ensure compliance with the policy and to prevent storage overflow.
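
Retention itself is usually enforced in the logging backend, but you can also control how much log data each node keeps. The kubelet rotates container log files based on its configuration; the values below are illustrative (and match the defaults):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Rotate a container's log file once it reaches this size
containerLogMaxSize: 10Mi
# Keep at most this many rotated files per container
containerLogMaxFiles: 5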

Use Labels and Annotations Wisely

Use labels to categorize logs by application, environment (e.g., production, staging), and severity level. This facilitates efficient filtering and querying of logs, making it easier to locate relevant information. 

Annotations can provide additional context, such as the version of an application or details about a deployment. Design a consistent labeling and annotation strategy to enhance log analysis and management.
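
For example, a pod might carry labels for filtering and annotations for extra context; the keys shown are illustrative, and logging agents can attach this metadata to every log line from the pod:

apiVersion: v1
kind: Pod
metadata:
  name: checkout
  labels:
    app: checkout
    environment: production
  annotations:
    example.com/version: "1.4.2"
spec:
  containers:
  - name: checkout
    image: nginx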

Control Access to Logs with RBAC

Role-based access control (RBAC) is essential for securing access to Kubernetes logs. Define RBAC policies that restrict log access based on the principle of least privilege. Ensure that only authorized personnel, such as developers, operators, and security teams, have access to logs relevant to their roles. 

This protects sensitive information and reduces the risk of unauthorized access or data breaches. Implement separate roles for reading logs and for managing logging infrastructure to further enhance security.
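
A minimal sketch of a read-only role for logs in a single namespace (the names here are illustrative) relies on the pods/log subresource:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: log-reader
  namespace: production
rules:
# pods/log is the subresource that kubectl logs reads from
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers-read-logs
  namespace: production
subjects:
- kind: Group
  name: developers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: log-reader
  apiGroup: rbac.authorization.k8s.io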

Set Resource Limits on Log Collection Daemons

Log collection daemons, such as Fluentd, Logstash, or Filebeat, should have resource limits configured to prevent them from consuming excessive CPU and memory resources. Without limits, aggressive log collection processes can impact the performance of the Kubernetes cluster. 

Use Kubernetes resource requests and limits to specify the maximum amount of resources a daemon can use. This ensures that logging operations do not interfere with the performance of applications running in the cluster.
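
For example, in the logging agent’s container spec (the values are illustrative starting points and should be tuned against observed usage):

resources:
  requests:
    cpu: 100m
    memory: 200Mi
  limits:
    cpu: 500m
    memory: 500Mi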

Optimize Logging Levels

Configure applications and Kubernetes components to log at appropriate levels (error, warning, info, debug) based on the environment and the criticality of the information. In production environments, consider limiting logs to warnings and errors to reduce volume. 

During development or troubleshooting phases, debug logs can be enabled for more detailed information. Regularly review and adjust logging levels to align with operational needs and to ensure that logs provide valuable insights without overwhelming storage and analysis systems.
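
Many applications read their log level from configuration or an environment variable. The variable name below is hypothetical and depends entirely on the application:

# In the container spec of a production deployment
env:
- name: LOG_LEVEL
  value: "warning"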

Coralogix for Kubernetes Observability

Coralogix sets itself apart in observability with its modern architecture, enabling real-time insights into logs, metrics, and traces with built-in cost optimization. Coralogix’s straightforward pricing covers all of its platform offerings, including APM, RUM, SIEM, infrastructure monitoring, and much more. With unparalleled support featuring response times of less than 1 minute and resolution times of 1 hour, Coralogix is a leading choice for thousands of organizations across the globe.

Learn more about Coralogix for Kubernetes
