The Kubernetes ecosystem is filled with terminology that can sound arcane to the uninitiated. The good news, though, is that the terms are pretty easy to understand once you have insight into the concepts behind them.

Case in point: Kubernetes ImagePullBackOff. At first glance, the term may seem to make little sense. But when you dive into it – as we're going to do in this article – you start to realize that ImagePullBackOff plays a central role in helping Kubernetes admins to manage container images and troubleshoot issues relating to container registries.

What Is the ImagePullBackOff Error and Why Does It Occur?

The Kubernetes ImagePullBackOff error is a status that occurs when Kubernetes cannot successfully pull a specified container image. If Kubernetes can't pull the image for a container, the container ends up stuck in the Waiting state.

typical ImagePullBackOff timeline

If the first attempt to pull an image fails, Kubernetes will keep retrying, with increasing lengths of time between each attempt. The delay roughly doubles after each failure until it reaches a cap of five minutes, after which Kubernetes keeps retrying at that maximum interval. This interval-based approach to retrying image pulls explains why ImagePullBackOff is called what it is: Kubernetes gradually "backs off" of attempts to pull the image. Kubernetes uses a similar approach for other types of issues, like CrashLoopBackOff.

There are several reasons why Kubernetes may not be able to pull an image:

  • The image doesn't exist in the image registry.
  • The image name or image tag that you've defined in the configuration for a container or Pod contains a typo.
  • Networking issues prevent Kubernetes from connecting to the container image registry.
  • The image is stored in a private registry and the secrets that Kubernetes needs to access it are not properly configured.

When you see an ImagePullBackOff error, then, your mission is to figure out which specific issue is causing the failure, then find a way to resolve it.

What Is the ErrImagePull Error?

When you see a Kubernetes ImagePullBackOff event, you'll probably also notice an error called ErrImagePull. What does that mean, and how does it relate to ImagePullBackOff?

The answer is that ErrImagePull is the error event that Kubernetes (specifically, kubelet) registers when it first fails to pull an image. After that, ImagePullBackOff events are recorded as Kubernetes re-attempts to pull the image at increasingly long intervals of time.

So, ErrImagePull and ImagePullBackOff both reflect the same type of core issue – failure to pull an image. The difference between them is that ErrImagePull signals the failure itself, while ImagePullBackOff is a status indicating that Kubernetes has been trying to pull the image again.

ImagePullBackOff, then, is technically not an error, although it results from an error – specifically, an ErrImagePull error.

Why It’s Important to Address an ImagePullBackOff Error

The reason why you need to detect and react to ImagePullBackOff error events is simple: until you fix the issue, any containers that experience the event won't start successfully. Instead, as we noted above, they'll be stuck in the Waiting state. Finding and fixing ImagePullBackOff events is therefore a critical part of any Kubernetes observability and monitoring strategy.

That might be OK if you don't care about actually running applications in Kubernetes. But if you're reading this, we're betting you want your containers to run, and an ImagePullBackOff error will prevent them from doing so successfully.

On top of that, fixing ImagePullBackOff issues is important because they may be the sign of a larger problem that will affect not just an individual container, but your entire environment. For example, if a networking issue is preventing Kubernetes from communicating with your container registry effectively, you'll probably want to sort that out before your entire hosting stack comes crashing down.

How Container Images Work in Kubernetes

Now that you know what ImagePullBackOff means and why it's important to address it, let's talk about strategies for fixing this type of issue. To do that, it's first necessary to go over the basics of how container images work in Kubernetes.

illustration representing a container image and outlining how container images work in Kubernetes

As you may know if you have experience working with containers, a container image is a file that contains the code necessary to run an application. When you want to run a container in Kubernetes, then, you need to specify where the image file for the container is located. Kubernetes doesn't have a way of locating images on its own.

Container images are typically hosted in container registries, which are repositories where developers can upload images for the applications they build. Normally, when you deploy an application in Kubernetes, the configuration for the deployment specifies a container registry to connect to, as well as the specific location of the given image within that registry. The configuration may also define an image tag, which refers to a specific version of an image.

As long as all of your configuration data is correct and Kubernetes can successfully connect to the registry, it will automatically download – or "pull," to use the technical term – the container image, then use it to start a container. But if it can't pull the image for some reason, you get an ErrImagePull error and ImagePullBackOff status.

For the sake of completeness – and to satiate those of you who may be reading this and thinking "but you don't always need to use a registry!" – let us note that there are alternative ways to work with container images in Kubernetes. If you don't want to pull images from a registry, you can set the imagePullPolicy to Never when configuring an application, then use a local Docker image with Kubernetes to start your application.
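As a rough sketch – the Pod and image names here are made up – a Pod spec that relies on a locally pre-loaded image looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: local-image-demo      # hypothetical Pod name
spec:
  containers:
    - name: app
      image: my-local-image:dev   # must already exist on the node
      imagePullPolicy: Never      # never contact a registry for this image
```

With imagePullPolicy set to Never, kubelet only looks for the image in the node's local image store and fails the container if it isn't there.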

But this approach is uncommon because managing images locally is a lot of work, plus it's difficult to share images efficiently if they’re stored on a local computer or server. If you have lots of images to deploy and you need to make them available across a distributed environment, you'll want to store them in a container registry and configure Kubernetes to pull them from that registry.

How to Pull a Container Image

Assuming you do use a registry, you have several options for pulling the image.

Pull the Image by Name

Pulling an image by name is the simplest approach because you only have to specify the image's name. When you do this, Kubernetes will pull the latest version of the image that is available in the registry. That may be fine, but keep in mind that the latest version may be one that is still experimental, in which case you should instead specify a tag when pulling the image (which we'll cover below).

To pull an image by name alone, create an application manifest that looks like this:
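A minimal example – the Pod name and the nginx image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: name-only-demo    # hypothetical Pod name
spec:
  containers:
    - name: app
      image: nginx        # no tag, so Kubernetes pulls nginx:latest
```

Because no tag is given, Kubernetes implicitly treats the image reference as nginx:latest.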

Pull the Image by Name and Tag

If you pull an image by both tag and name, you'll get a specific version of the image. The application manifest for this approach is very similar to one for pulling by name alone; the only difference is that you add a tag (1.2.3 in the example below) when defining the image name:
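For instance, again with placeholder names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tagged-demo       # hypothetical Pod name
spec:
  containers:
    - name: app
      image: nginx:1.2.3  # pulls the image version tagged 1.2.3
```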

Keep in mind that the tag for the image you specify has to exist in the registry. Otherwise, the pull will fail; Kubernetes doesn't automatically attempt to use a different image version if it can't locate one with the tag you defined.

Pull the Image by Digest

A third approach to pulling images in Kubernetes is to specify a digest. A digest is a unique identifier that is generated when a container image is created. A digest is similar to a tag in that it tells Kubernetes to pull the specific version of a given image; however, with a digest, the metadata that defines which version to use is immutable, whereas a tag is not.

With tags, then, there’s a chance that the image you’re pulling could change even if your tag does not, because someone could update the image within the registry without changing the tag. But with a digest, any changes to the image will cause the digest to change, too. So, the main reason why you may want to use a digest is that it guarantees you're getting the specific version of a container image that you expect.

To pull an image by digest, simply specify the digest value when creating your application manifest. For example:
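Here's a sketch; the image name is illustrative and <digest> stands in for a real sha256 value from your registry:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: digest-demo       # hypothetical Pod name
spec:
  containers:
    - name: app
      # <digest> is a placeholder for the image's actual sha256 digest
      image: nginx@sha256:<digest>
```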

In this case, the digest is a sha256 value. To find the digest for an image, check your registry.

Most Common Causes of ImagePullBackOff Errors

| Problem | Description | How to resolve |
| --- | --- | --- |
| Incorrect image configuration | Image name, tag and/or digest value is not properly set. | Make sure the image configuration matches the configuration of the container registry. |
| Inaccessible container registry | Kubernetes can't access the container registry. | Ensure that Kubernetes and the registry can connect over the network. Also make sure the registry server is up and running properly. |
| Authentication and authorization issues | Kubernetes can't authenticate with a private registry. | Make sure the secret for registry access is properly configured. |
| Resource constraints and Kubernetes cluster load | Kubernetes lacks enough resources to perform the image pull. | Allocate more resources, or wait until cluster load decreases and try starting the container again. |

Now that you know how image pulls work in Kubernetes, let's talk about why they might fail and create ImagePullBackOff errors. The following are the most common reasons.

Invalid Image Name

If you don't specify the right image name or tag, Kubernetes won't be able to find the image.

An incorrect name or tag specification could occur because you have the wrong information; for example, someone might have changed the name for an image within your registry and you’re not aware of the change. But it could also be the result of a simple typo. If your name or tag is off by just a single character, Kubernetes won't be able to locate it properly. (Tangentially, it would be cool if Kubernetes had the ability to auto-correct or intelligently work around typos, but alas, it doesn't.)

Inaccessible Container Registry

If your Kubernetes cluster can't connect to your container registry, you'll get an ImagePullBackOff event.

Failure to connect is most often the result of a networking configuration problem; for example, your registry might be located on a private network that kubelet can't access. But container registry access issues could also result from problems like a server that hosts the registry temporarily crashing.

Authentication and Authorization Issues

If you're pulling an image from a public registry, the registry doesn't have to authenticate or authorize a pull request. But if you're using a private registry, authentication and authorization are necessary. If they fail, the pull request will be rejected.

The most common cause of authentication and authorization issues is improperly configured secrets. Typically, when pulling from a private registry, you specify a secret using the imagePullSecrets field. If the secret you define either doesn't exist or doesn't contain the information necessary to authenticate properly with the registry, the image pull will fail.
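As an illustration – the registry URL, image path and secret name here are all hypothetical – a Pod that pulls from a private registry references the secret like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo    # hypothetical Pod name
spec:
  containers:
    - name: app
      image: registry.example.com/team/app:1.2.3   # hypothetical private image
  imagePullSecrets:
    - name: regcred   # must match an existing docker-registry secret in this namespace
```

A matching secret can be created with kubectl create secret docker-registry regcred --docker-server=… --docker-username=… --docker-password=… (with your registry's real values in place of the placeholders).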

Resource Constraints and Kubernetes Cluster Load

In some cases, a lack of resources on the node – most often disk space for storing the image, but occasionally memory or CPU – can cause an image pull to fail and trigger an ImagePullBackOff. This is rare because image pulls aren't particularly resource-intensive events, but nodes that are very close to 100 percent resource utilization may run into this problem.

How to Troubleshoot and Fix the ImagePullBackOff Error

To fix ImagePullBackOff errors as part of your Kubernetes troubleshooting routine, start by checking your application manifest. Confirm that the image name, tag and/or digest values are properly configured and that they match the configuration of your container registry.

If you believe everything is appropriately configured, try pulling the image directly from the command line (using the docker image pull command) with the same values that are specified in your application manifest. If this works, you know the image is accessible and that the root cause of the problem lies somewhere in Kubernetes. In that case, a lack of available resources or networking configuration issues are your most likely culprits.
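For example, assuming a hypothetical image reference, the manual check looks like this:

```shell
# Pull manually with the exact reference from your manifest (placeholder shown):
docker image pull registry.example.com/team/app:1.2.3

# On nodes that use containerd rather than Docker, crictl offers the same check:
crictl pull registry.example.com/team/app:1.2.3
```

If the manual pull succeeds from a machine on the same network as your nodes, the registry side is healthy and the misconfiguration is in Kubernetes.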

If you can't pull the image either from Kubernetes or directly from the CLI, you most likely have an issue with your registry. In that case, make sure the server that hosts the registry is up, has sufficient resources available to it and has network connectivity that allows it to communicate with your Kubernetes nodes. If all else fails, a generic mitigation like restarting the container registry might help.

Key Takeaways

To sum up, ImagePullBackOff errors are a common occurrence in Kubernetes, and they can happen for a variety of reasons. When you run into this type of error, work through the various potential causes for Kubernetes to be unable to pull your container, starting with the most common – mistakes in the configuration that defines the image name and location.

In more complex situations, you may find that the reason Kubernetes can't pull an image has to do with your networking configuration, resource allocations to servers or bugs within your container registry. Fortunately, those problems are less common than misconfigured image names, tags or digests.

Check out our Kubernetes Troubleshooting Guide for more errors.


Here are answers to common questions about CrashLoopBackOff.

How do I delete a CrashLoopBackOff Pod?

To delete a Pod that is stuck in a CrashLoopBackOff, run:

kubectl delete pods pod-name

If the Pod won't delete – which can happen for various reasons, such as the Pod being bound to a persistent storage volume – you can run this command with the --force flag (typically combined with --grace-period=0) to force deletion. This tells Kubernetes to remove the Pod from the API immediately, without waiting for confirmation that its containers have terminated.

How do I fix CrashLoopBackOff without logs?

If you don't have Pod or container logs, you can troubleshoot CrashLoopBackOff using the command:

kubectl describe pod pod-name

The output will include information that allows you to confirm that a CrashLoopBackOff error has occurred. In addition, the output may provide clues about why the error occurred – such as a failure to pull the container image or connect to a certain resource.

If you're still not sure what's causing the error, you can use other troubleshooting methods – such as checking DNS settings and environment variables – to troubleshoot CrashLoopBackOff without having logs.

Once you determine the cause of the error, fix the underlying issue. For example, if you have a misconfigured file, simply update the file.

How do I fix CrashLoopBackOff containers with unready status?

If a container experiences a CrashLoopBackOff and is in the unready state, it means that it failed a readiness probe – a type of health check Kubernetes uses to determine whether a container is ready to receive traffic.

In some cases, the cause of this issue is simply that the health check is misconfigured, and Kubernetes therefore deems the container unready even if there is not actually a problem. To determine whether this might be the root cause of your issue, check which command (or commands) are run as part of the readiness check. This is defined in the container spec of the YAML file for the Pod. Make sure the readiness checks are not attempting to connect to resources that don't actually exist.
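For reference, here's a sketch of where a readiness probe lives in a Pod spec – the image, path and timings are all illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo    # hypothetical Pod name
spec:
  containers:
    - name: app
      image: nginx:1.25   # hypothetical image
      readinessProbe:
        httpGet:
          path: /healthz  # must be a path the app actually serves
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
```

If /healthz doesn't exist in the application, the probe fails on every check and the container stays unready even though the app itself may be fine.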

If your readiness probe is properly configured, you can investigate further by running:

kubectl get events

This will show events related to the Pod, including information about changes to its status. You can use this data to figure out how far the Pod progressed before getting stuck in the unready status. For example, if its container images were pulled successfully, you'll see that.

You can also run the following command to get further information about the Pod's configuration:

kubectl describe pod pod-name

Checking Pod logs, too, may provide insights related to why it's unready.

For further guidance, check out our guide to Kubernetes readiness probes.
