Aviv Zohari, Founding Engineer
10 minutes read, May 30th, 2023

You've probably heard that the pillars of observability are logs, metrics, and traces – all of which provide critical insight into what's happening in a complex system.

That's true, but when it comes to Kubernetes, there's another key type of data source that drives effective monitoring and observability workflows: Events. Kubernetes events happen when various types of changes take place within a cluster. Tracking events and correlating them with performance or other issues is a great way to help figure out what's happening inside your Kubernetes environment.

So, without further ado, let's talk about what every K8s admin should know about Kubernetes events, including what they are, how they're generated, which types of events exist in Kubernetes, and how to get the most out of events as part of a Kubernetes monitoring and observability strategy.

What are Kubernetes events?

Kubernetes events are records that Kubernetes generates in response to specific types of actions or changes that take place within a cluster – such as a change to a Pod's state or the scheduling of a Pod on a node.

Kubernetes generates events when a request or state change is successful, as well as when a failure occurs. Thus, events are useful not just for identifying problems, but also for gaining critical context that helps you understand the overall health and status of your cluster and applications.

It's important to note that not every single action or change in Kubernetes triggers an event. Events don't track which commands you've run using kubectl, for example, or record network requests. Instead, the main purpose of events is to provide a mechanism for reporting changes to the state or configuration of resources in Kubernetes. Events aren't designed for tracking user activity or detecting security risks (audit logs may be useful for that purpose). 

At a high level, there are three main types of events that Kubernetes tracks: Configuration changes, scheduling events, and state changes.

Main types of Kubernetes events

Event type            | Purpose                                    | Example
Configuration changes | Report changes to resource configurations  | Pod memory allocation changed
Scheduling            | Track scheduling occurrences               | Pod scheduled on a node
State changes         | Report changes to resource state           | Pod transitions from Pending to Succeeded

Configuration changes

When you change certain aspects of a resource's configuration, Kubernetes records the change as an event. For example, if you change the memory allocated to a Pod, you'll see the change recorded as an event.

Scheduling

Scheduling occurrences, like the assignment of a Pod to a particular node, generate events. So do scheduling errors, like failure to find a node that can accommodate a new Pod.

State changes

When the state of various types of Kubernetes resources changes, an event happens. Examples of state changes include the creation of a new Pod, Pod evictions from nodes, and the transition of Pods from one state (like pending) to another (like succeeded).

Kubernetes events vs. logs

If you're thinking, "Hmmm, Kubernetes events sound a lot like the type of data recorded in log files," you're not wrong. Events in Kubernetes are indeed comparable to the information you'd find in a typical application or operating system log, which also usually registers changes like updates to an application's state or various types of failures.

However, a key difference between events and logs is that Kubernetes doesn't actually persist event data anywhere for the long term. In other words, it doesn't compile events into log files that you can then go and analyze as you would any other type of log file.

You can collect events and write them to a log file of your own choosing with help from third-party tools (like a Grafana data collector). But you can't simply pull events out of Kubernetes logs, because Kubernetes itself doesn't log events.

This is why events don't fit neatly into the logs-metrics-traces paradigm of modern observability. They're sort of like log data, but they're also totally not logs, and if you want to leverage events as part of a Kubernetes observability strategy, you need to think beyond conventional approaches to observability.

Types of Kubernetes events

We told you above that there are three main types of high-level Kubernetes events: configuration changes, scheduling, and state changes. However, events within each of these three domains can be broken down into distinct categories, and events in each category provide different types of information, which you'll want to handle in different ways.

Normal events

First off, there are what we like to call "normal" events. These are events that signal routine changes or developments – like a Pod being created or transitioning as expected between stages as it starts up.

Normal events don't signal a problem that you need to react to. However, monitoring normal events is important nonetheless because you may be able to correlate these events with other developments to help troubleshoot issues. For example, if you can't reach a Pod from the network and you want to figure out why, viewing events that show the Pod successfully started up will allow you to rule out problems with Pod state changes that could explain the issue. You'd know instead that you should focus on other possible root causes, like issues with the Pod's network configuration.
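For example, you can review a specific Pod's recent events (shown at the bottom of the output; the Pod name nginx here is just a placeholder) with:

```
kubectl describe pod nginx
```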

Failed events

A failed event is any event that involves something failing to happen as expected – like failure to pull a container image or failed scheduling events.

In some cases, a failed event may not signal a major issue because Kubernetes may be able to recover. For instance, if Kubernetes couldn't pull a container image due to a temporary networking glitch, it will retry, and it may succeed on the second attempt. In that instance, you can likely ignore the failed event (unless you see this type of event happening on a recurring basis, in which case you'd probably want to figure out why Kubernetes keeps having intermittent issues pulling images).

But in most cases, failed events are something to pay attention to. Unless you can confirm that Kubernetes succeeded in completing whichever action initially resulted in failure, you should troubleshoot and fix the failure to ensure the stability of your workloads and clusters. Otherwise, you're likely to find that your applications are not running because your Pods can't be successfully scheduled or keep crashing, for example.
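One quick way to surface failed events is to filter for the Warning event type, which is how Kubernetes flags failures:

```
kubectl get events --field-selector type=Warning
```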

Eviction events

Eviction events occur whenever Kubernetes evicts a Pod from a node, meaning it stops the Pod. Typically, Kubernetes will reschedule the Pod on a different node following eviction.

Evictions are not necessarily a sign of a problem because some evictions occur under normal, expected conditions. For instance, Kubernetes might evict a Pod from one node and reschedule it on another in order to balance workloads across nodes more efficiently. Or, Pod eviction may occur as the result of a request made by a Kubernetes admin to stop a Pod or drain a node (which means removing running Pods from the node) so the node can shut down.
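For instance, draining a node (using a hypothetical node named node-1 here) will produce eviction events for the Pods running on it:

```
kubectl drain node-1 --ignore-daemonsets
```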

But in other circumstances, eviction events are associated with problems. In particular, if Kubernetes evicts a Pod because the node hosting it doesn't have enough resources to keep operating in a stable way, that's a problem. It typically means you need to add more nodes to your cluster and/or shut down some unused Pods in order to free up resources.

So, while eviction events don't require a response in every case, you should always track them to determine whether the eviction is a sign of a problem you need to deal with.

Node events

Node events are (you guessed it) events involving nodes. This category of events overlaps with the categories we discussed above, since any normal change, failure, or eviction event may also involve a node.

So, why do we list node events as their own category? Because keeping track of node events is a useful way of monitoring overall node status and health. If you notice a lot of failures involving a particular node, for instance, it's a sign that you may want to investigate why that node is behaving poorly.
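One way to keep an eye on node events specifically is to filter events by the kind of resource involved:

```
kubectl get events --field-selector involvedObject.kind=Node
```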

Volume events

Volume events record changes related to storage volumes, such as creating or deleting volumes. Like node events, events related to volumes are worth tracking as their own category because they can provide important insight into the health and performance of the storage resources that your Kubernetes workloads depend on.
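The same kind-based filtering works for storage resources – for example, to see events involving PersistentVolumeClaims:

```
kubectl get events --field-selector involvedObject.kind=PersistentVolumeClaim
```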

Accessing Kubernetes events

The simplest way to access Kubernetes events is with the kubectl get events command:
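```
kubectl get events
```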

Or, to specify a namespace, use the -n flag:
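```
# Replace my-namespace with the namespace you want to inspect
kubectl get events -n my-namespace
```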

The output displays events in a format like the following (exact values will vary based on your cluster):
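```
LAST SEEN   TYPE     REASON    OBJECT      MESSAGE
2m          Normal   Killing   pod/nginx   Stopping container nginx
```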

Here, the event we're seeing is a Pod named nginx shutting down.

How to store Kubernetes events

Importantly, by default most Kubernetes distributions store event data for only 60 minutes. After that, it's gone from Kubernetes for good.

For that reason, you may want to install a data collector that automatically performs event collection and stores events outside of Kubernetes, where you can view and retain data for as long as you want. You can use tools like the Grafana Agent's eventhandler integration (configured via its eventhandler_config block) for this purpose.
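As a rough sketch, a Grafana Agent configuration enabling that integration might look like the following (the paths, instance name, and Loki endpoint are illustrative assumptions – check the Agent documentation for your version):

```yaml
integrations:
  eventhandler:
    # Where the integration caches the last-shipped event, so restarts don't re-send old events
    cache_path: "/etc/eventhandler/eventhandler.cache"
    # The logs instance (defined below) that events are written to
    logs_instance: "default"
logs:
  positions_directory: /tmp/positions
  configs:
    - name: default
      clients:
        # Illustrative Loki endpoint; point this at your own log store
        - url: http://loki:3100/loki/api/v1/push
```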

Why doesn't Kubernetes show any events?

If the kubectl get events command reports that there are no events to display, there are two likely causes.

One is that you're looking at only a particular namespace and there are no events to show for that namespace. In that case, try running:
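```
kubectl get events --all-namespaces
```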

to list events from all namespaces. Or, as we mentioned, use kubectl get events -n some-namespace to specify events for a particular namespace.

The other possible issue is that you have no recent events in your cluster. As we mentioned, most Kubernetes distributions delete events by default after 60 minutes – so if no events have occurred in the past hour, you'll get no output from kubectl get events.

If you suspect this is the issue, try triggering an event by, for example, creating a new Pod or deleting an existing one. You should then see the event reported by kubectl.
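For example (using a throwaway Pod name):

```
kubectl run test-pod --image=nginx
kubectl get events
```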

Filtering Kubernetes events

By default, kubectl get events shows all the events for the selected namespace (or all namespaces, if you use the --all-namespaces flag). If you want to filter events so you can focus on a specific event type (like failures) or specific resources (like a particular Pod or node), there are a few ways to do so.

Kubectl --field-selector

One is to use the --field-selector flag when pulling events data. This tells kubectl to display events that match only the fields you specify.

For example, if you want to view only events related to a Pod named nginx, you could run:
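```
kubectl get events --field-selector involvedObject.name=nginx
```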

Grep

Another method is to pipe the output of kubectl get events to grep, then use grep to filter for whichever event types you want to focus on. For instance, this command would also display only events that mention nginx:
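```
kubectl get events | grep nginx
```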

This is a cruder method than using --field-selector, but you may find it easier if you're more familiar with old-school Unix tools like grep than you are with kubectl.

Data collector filters

The third way to filter events is to collect events data using a data collector and configure the collector to record only certain types of information. The exact way to do this will depend on which data collector you use.

How to monitor Kubernetes events with groundcover

If tracking and interpreting events data by hand doesn't sound like a fun time, consider groundcover, which automatically tracks events along with all of the other data you need to implement an effective Kubernetes tracing, logging, and metric tracking strategy.

By comprehensively tracking and correlating all types of Kubernetes observability insights, groundcover helps you identify events and other issues that require your attention, and then trace problems quickly to their root cause.

Conclusion

In the context of observability, Kubernetes events are something of an oddball. They don't behave exactly like logs, metrics, or traces, but they do provide a critical fourth dimension of visibility into your clusters and applications.

So, love them or hate them, Kubernetes events are something you'll want to track closely to keep your clusters running smoothly.
