
Kubernetes Logging: How to Optimize Your Logs for Efficiency
Learn which log files are available in Kubernetes, what data they reveal and how to analyze that data, so you can gain critical visibility into what's happening with your applications and troubleshoot problems effectively when they arise.


Part of the beauty of Kubernetes is that you can tell it how you want your containerized applications to operate, and Kubernetes then attempts to manage them automatically according to your specifications.
However, just because Kubernetes tries to do what you want doesn't mean that it always actually does what you request. As with any complex system, a variety of things can go wrong in Kubernetes, and you need to monitor the status of Kubernetes continuously to stay on top of those challenges.
That's why Kubernetes logging plays a central role in running Kubernetes effectively. By understanding which log files are available in Kubernetes, which data they reveal and how to analyze that data, you can gain critical visibility into what's happening with your applications. You can also troubleshoot problems effectively when they arise.
With those goals in mind, let's walk through everything K8s admins need to know about logging. This article breaks down the Kubernetes logging architecture, explains which log files are available in a Kubernetes cluster and walks through the process of managing logs using a logging agent.
Types of logs in a Kubernetes cluster
The first step in mastering Kubernetes logging is to understand which log files exist in a Kubernetes cluster.
This is a complex topic because Kubernetes includes many moving parts, and most of those parts generate their own log files. That said, at a high level, Kubernetes logs can be broken down into three main categories:
• System logs: These logs provide information about the state of the core components of Kubernetes itself, such as the API server and scheduler.
• Application logs: These are logs that provide insight into the status and health of applications running inside containers. For example, you can find application-level error messages here.
• Audit logs: Audit logs record actions taken by human users as well as system processes inside Kubernetes. They're valuable for researching changes that took place before a problem occurred. They can also help to identify potential security risks.
By collecting and analyzing all three of these types of logs, you can achieve comprehensive visibility into all major components of your Kubernetes environment – the control plane, the nodes that host applications and the applications themselves.
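To make that concrete, here's a minimal sketch of pulling application logs programmatically with the official Kubernetes Python client (`pip install kubernetes`). The Pod name and namespace below are placeholders for illustration, not values from a real cluster:

```python
# A minimal sketch using the official Kubernetes Python client
# (pip install kubernetes). Pod names and namespaces are placeholders.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a Pod
v1 = client.CoreV1Api()

# Application logs: read the stdout/stderr output captured from a container.
app_logs = v1.read_namespaced_pod_log(
    name="my-app-pod",    # hypothetical Pod name
    namespace="default",
    tail_lines=50,
)
print(app_logs)

# System logs: on clusters that run control plane components as Pods,
# the same call works against Pods in the kube-system namespace.
for pod in v1.list_namespaced_pod("kube-system").items:
    print(pod.metadata.name)
```

Audit logs are a different story: they're written by the API server according to an audit policy you configure, so they're collected from the control plane rather than read through the Pod log API.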
The Kubernetes logging architecture
One factor that complicates Kubernetes logging is that most components of Kubernetes are constantly changing, and some are not persistent. For example, when containers shut down, any data stored inside them, including logs, will go away with them.
To solve this challenge, Kubernetes uses a logging architecture – known as cluster-level logging – that decouples log storage and lifecycles from nodes, Pods and containers. Cluster-level logging provides a separate backend to store, analyze and query logs from various sources within Kubernetes.
That said, Kubernetes doesn't provide a native storage solution for hosting log data. You have to implement that on your own, using a third-party logging solution that integrates with Kubernetes.
As for actually generating log data, Kubernetes writes logs from control plane components directly. In addition, the container runtime captures the output that applications write to their stdout and stderr streams and redirects it to a logging destination, such as a file on the node. Different container runtimes handle these streams in somewhat different ways (for example, the integration with the kubelet uses the CRI logging format), but they all make it possible to log data about the status of applications, as long as those applications can expose data to stdout and stderr.
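Here's a minimal sketch of what that looks like from the application's side: the program simply writes to its standard streams, and the container runtime, not the application, is responsible for capturing both streams and persisting them as logs. The timestamp/level layout here is the application's own choice, not something Kubernetes requires:

```python
# A minimal sketch of an application "exposing data to stdout and stderr".
# There is no Kubernetes-specific code here: the container runtime captures
# both streams and writes them to the node as container logs.
import sys
import time

def log(stream, level, message):
    # A simple timestamp/level/message layout; the exact format is up to
    # the application, since the runtime records the raw stream output.
    stream.write(f"{time.strftime('%Y-%m-%d %H:%M:%S')} {level} {message}\n")
    stream.flush()

log(sys.stdout, "INFO", "application started")
log(sys.stderr, "ERROR", "failed to reach backing database")
```

On nodes that use the CRI logging format, each captured line is stored together with a timestamp and the stream it came from (stdout or stderr), so tooling can later tell routine output and error output apart.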
Kubernetes log structures
Because there are so many different types of logs in Kubernetes, the exact structure of Kubernetes log files varies. But in general, logs typically include the following structural components:
• Timestamp: Timestamps record the time at which each log entry was generated, usually in the format of YYYY-MM-DD HH:MM:SS.microseconds.
• Log level: Log level identifies the severity of the log entries. For example, entries could be categorized at the info, warning or error levels.
• Component: This identifies the component or process that generated the log entry. The component could be the Kubernetes API server, a specific pod or a container.
• Message: The message contains the actual content of the log entry, which may include details about the event or error that occurred.
• Additional fields: Depending on the logging driver and configuration, Kubernetes log files may also include additional fields such as the Pod metadata and names, namespace, or container ID.
In most cases, this data is sufficient for understanding not just what happened, but also for investigating the context that caused it to happen – and, by extension, for troubleshooting and remediating problems.
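As a rough illustration of those structural components, here's a sketch that splits a log line into timestamp, level, component and message. Both the sample line and the regular expression are assumptions for demonstration purposes, since the exact layout varies by component and logging configuration:

```python
# A minimal sketch of extracting the structural components described above.
# The sample line and pattern are illustrative, not a universal K8s format.
import re

LINE = ("2023-04-01 12:34:56.789012 INFO kube-scheduler "
        "Successfully bound pod default/my-app to node-1")

PATTERN = re.compile(
    r"^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\s+"
    r"(?P<level>INFO|WARNING|ERROR)\s+"
    r"(?P<component>\S+)\s+"
    r"(?P<message>.*)$"
)

match = PATTERN.match(LINE)
if match:
    entry = match.groupdict()
    print(entry["timestamp"], entry["level"], entry["component"])
    print(entry["message"])
```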
Kubernetes log example
As an example of a Kubernetes log file entry, here's an illustrative line in the style of a kubelet event log, using the klog text format that core Kubernetes components emit. The specific values are invented for demonstration:
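```
I0601 09:15:23.412536    2845 kubelet.go:2184] "SyncLoop ADD" source="api" pods=["default/my-app-7d9c6bf5d4-x2x9k"]
```

Reading left to right: the leading I is the severity (info), 0601 is the date (MMDD), followed by the timestamp, the process ID, the source file and line that emitted the entry, and finally the message, here with structured key=value fields identifying the Pod involved.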