Kubernetes Error Codes: Failed to Pull Image

Kubernetes is a complex system with many moving parts. Successfully pulling an image and starting a pod's containers requires several components to work together, which means errors can and will occur. It's important that you're equipped to deal with them so you can keep your cluster running.

Kubernetes error codes can seem opaque to newcomers because they demand a general understanding of core cluster concepts and how Kubernetes works. Most errors are self-explanatory once you have that knowledge, but they can still be tricky to diagnose and resolve.

Learning how to respond to errors ahead of time helps you pinpoint the problems in your cluster and address them with minimal downtime. Being able to anticipate possible errors makes debugging faster, easier, and more precise. You’ll also gain insights into how Kubernetes works that will help you avoid hitting the same error in the future.

In this article, we’ll take a look at the `Failed to pull image` error code and explore some methods to go about resolving it.

What Does `Failed to Pull Image` Mean?

You’ll get a `Failed to pull image` error when Kubernetes tries to create a new pod but can’t pull the container image it needs in order to do so. You’ll usually see this straight after you try to apply a new resource to your cluster using a command like `kubectl apply`. After a while, you may realize the pod isn’t running, and when you inspect it with `kubectl describe pod/my-pod`, you’ll see the error in the Events table.

Pull errors originate from the nodes in your cluster. Each node’s Kubelet worker process is responsible for acquiring the images it needs to service a pod scheduling request. When the node is unable to download an image, it reports the status back to the cluster control plane.

It’s possible that some nodes in your cluster will be able to pull images while others are stuck with failures. This happens because there are many different reasons why an image might fail to download. These reasons range from basic network connectivity issues to problems such as an invalid tag reference or missing registry authentication. Irrespective of the underlying cause, the effect is the same: your pod won’t be able to start until the image is available to the node.

Image pull errors usually cause a pod’s status in `kubectl get pod/my-pod` to show as [`ImagePullBackOff`](https://kubernetes.io/docs/concepts/containers/images/#imagepullbackoff). This status means Kubernetes is trying to pull the image referenced in the pod’s `spec.containers.image` field but consecutive attempts have failed. It’s “backing off” by waiting a while before trying again.

When you see `ImagePullBackOff` in a pod’s status field, a `Failed to pull image` error will be the root cause. Once you’ve addressed the problem that’s blocking the image download, Kubernetes should successfully complete the pull next time it tries. This will clear the `ImagePullBackOff` status and allow pod creation to proceed.
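To confirm that an image pull failure is what's holding up your pod, you can query its container status and recent events directly. This is a sketch; the pod name `my-pod` is just the example used above:

```shell
# Show the waiting reason for each container (e.g. ImagePullBackOff or ErrImagePull)
kubectl get pod my-pod \
  -o jsonpath='{.status.containerStatuses[*].state.waiting.reason}'

# List recent events for the pod, including the full "Failed to pull image" message
kubectl get events --field-selector involvedObject.name=my-pod \
  --sort-by=.lastTimestamp
```

The events output usually contains the registry's exact error string, which narrows down which of the causes below you're dealing with.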

Solving Image Pull Errors

Fortunately, image pull errors are among the easier kinds of Kubernetes issues to resolve. Methodically eliminating the possible causes is usually the quickest route to a fix. Let's explore a few important troubleshooting steps.

Check the Pod’s Image

The first thing you should do is check the pod’s `image` field and look for basic mistakes—typos can happen to anyone.


apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: ngnx:1.21 # Typo!

If there are no typos, make sure the complete image tag actually exists in your registry. A tag that’s been deleted from the registry (or one that was never there to begin with) won’t be retrievable. You’d need to change the `image` field to a different tag instead.

You should also check to be sure that your image tag includes the registry URL when one is required. Unqualified image tags such as `demo-image:latest` will be pulled from the public Docker Hub, which might not be what you intended. You must use the `registry.example.com/demo-image:latest` format to ensure Kubernetes pulls the image from the `registry.example.com` registry.
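If you have Docker available locally, you can verify that a tag actually exists in the registry before pointing Kubernetes at it. A quick sketch, using the public `nginx` image and the placeholder registry hostname from above:

```shell
# Query the registry for the tag's manifest without pulling the image;
# a deleted or mistyped tag produces a "manifest unknown" style error
docker manifest inspect nginx:1.21 > /dev/null && echo "tag exists"

# For a private registry, log in first and use the fully qualified name
docker login registry.example.com
docker manifest inspect registry.example.com/demo-image:latest
```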

All fixed? Apply your changes to your cluster with Kubectl:


kubectl apply -f my-pod.yaml

The old pod will get replaced with the new definition, including the updated image tag. Your nodes should be able to successfully pull that image, enabling pod provisioning to proceed to container creation.

Accessing a Private Image

Another common cause of image pull errors occurs when you’re using a private registry. Kubernetes needs to be [given credentials](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry) it can use to authenticate to the registry. Without them, pulls will be unsuccessful; you’ll see a `pull access denied` message as a part of the `failed to pull image` error:


Failed to pull image "registry.example.com/private-image:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.example.com/private-image:latest, repository does not exist or may require 'docker login': denied: requested access to the resource is denied

Credentials are injected into the cluster environment as Kubernetes [secrets](https://kubernetes.io/docs/concepts/configuration/secret). A registry login secret is a special type called `dockerconfigjson` that mirrors the `config.json` files used by the Docker CLI.

Here’s an example of a Kubernetes secret for authenticating to `registry.example.com`:


apiVersion: v1
kind: Secret
type: kubernetes.io/dockerconfigjson
metadata:
  name: image-pull-secret
data:
  # Replace with the Base64 encoding of this JSON:
  # {"auths": {"registry.example.com": {"username": "example-user", "password": "a1b2c3d4e5f6"}}}
  .dockerconfigjson: <base64-encoded JSON>

Secrets that contain registry authentication data *must* have a `.dockerconfigjson` field within their data. This needs to be a Base64-encoded JSON object with an `auths` top-level property. Within this property, you pair registry hostnames with their respective credentials.
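One way to produce that Base64 value is to encode the JSON yourself. Here's a sketch using the example registry and credentials from above:

```shell
# Build the Docker config JSON with the example registry and credentials
config='{"auths": {"registry.example.com": {"username": "example-user", "password": "a1b2c3d4e5f6"}}}'

# Base64-encode it for the secret's .dockerconfigjson field
# (tr strips the line breaks some base64 implementations insert)
printf '%s' "$config" | base64 | tr -d '\n'
```

Alternatively, `kubectl create secret docker-registry image-pull-secret --docker-server=registry.example.com --docker-username=example-user --docker-password=a1b2c3d4e5f6` builds an equivalent secret for you without any manual encoding.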

Apply your secret to your cluster using Kubectl:


kubectl apply -f image-pull-secret.yaml

Now you must update your pod’s manifest to instruct Kubernetes to use the created secret when fetching the image:


apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: example
    image: registry.example.com/example-image:latest
  imagePullSecrets:
    - name: image-pull-secret

The `spec.imagePullSecrets` field is used to reference `dockerconfigjson` secrets that will be made available to the Kubelet process on nodes that schedule the pod. The snippet above instructs Kubelet that it can find registry credentials inside the secret called `image-pull-secret`; this matches the name of the secret created earlier.

Apply the updated pod manifest to your cluster:


kubectl apply -f my-pod.yaml

The next pull attempt should now succeed. Kubernetes will be able to authenticate to the registry and download the image.

Be aware that variations of the `pull access denied` error may also appear when you're using authentication but have specified an invalid username or password. In this case, check the pod's `imagePullSecrets` field to make sure it references the correct secret, then ensure that the JSON within that secret contains a working credential pair.
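You can see exactly which credentials Kubernetes is holding by decoding the stored secret. A sketch, using the secret name from the example above:

```shell
# Decode the stored Docker config JSON
# (the backslash escapes the literal dot in the .dockerconfigjson key)
kubectl get secret image-pull-secret \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
```

Compare the output against the username and password that work when you run `docker login` against the registry by hand.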

Registry Connectivity Issues

Registry outages can also stop you from pulling images. While it’s easy to assume that major public services will always be up, that’s not necessarily the case in practice. When you’re sure your `image` field is valid and correct credentials are available, it’s time to start probing other areas for issues.

Try pinging the registry from outside your cluster. Any failure to connect could mean the problem is at your registry provider, not your Kubernetes cluster.
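For registries that speak the standard Docker Registry HTTP API, probing the `/v2/` endpoint is a quick health check. This sketch uses the placeholder hostname from earlier:

```shell
# A healthy registry answers /v2/ with an HTTP status code:
# 200, or 401 if it requires authentication. Connection failures
# or 5xx responses suggest an outage at the provider.
curl -sS -o /dev/null -w '%{http_code}\n' https://registry.example.com/v2/ \
  || echo "registry unreachable"
```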

In this circumstance, you might be able to use a workaround to resolve the image pull error and start your containers. Kubernetes supports several different image pull policies that determine when nodes should attempt to pull images. The following policies are offered:

- **`IfNotPresent`** - The image is only pulled if it’s not already available on the node. This is the default policy when you’ve specified an image tag and it’s not `latest`.

- **`Always`** - Instructs Kubernetes to always try to pull the image. This is the default policy when you’re using the `latest` tag.

- **`Never`** - Prevents Kubernetes from ever pulling the image; it must already be available on the node via another mechanism.

Problems can arise when your pod is set to use the `Always` policy, whether explicitly with the `spec.containers.imagePullPolicy` field or as the default for the `latest` tag. Kubernetes needs to consult the registry to determine whether the image content has changed. When the registry is offline, this won’t succeed and a pull error will occur. Temporarily changing the image pull policy to `IfNotPresent` will bring your containers up *if* an image with an identical tag is already available on the node.

Add or edit the field in your pod’s YAML:


apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.21
    imagePullPolicy: IfNotPresent

Then apply the updated manifest to your cluster:


kubectl apply -f my-pod.yaml

Kubelet will now be able to reuse the existing image version, letting your containers start. You can revert the change once your registry’s back online, putting you back to pulling the latest image updates on each deploy.

Other Image Pull Error Causes

Still debugging? There are a few other reasons why images can fail to pull. Your node’s networking could have failed or there might be wider cluster connectivity issues. If you’re online, the registry’s up, and pull errors are persisting, you could be running into firewall or traffic filtering issues.

One final possibility is hitting rate limits imposed by public registry providers. Docker Hub [now limits you to](https://www.docker.com/increase-rate-limits) one hundred image pulls every six hours, or two hundred every six hours if you supply credentials. Once your allowance is used up, subsequent pulls will fail; your only recourse is to wait for the limit to reset.

This can be a follow-on symptom from the causes described above. If you had a typo in your image tag that caused Kubernetes to keep retrying the download, you might hit the rate limit before you’re able to fix the tag. You’d then need to wait for the rate limit to reset before your cluster could pull the corrected image tag.
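Docker Hub reports your current allowance via response headers on a dedicated test image, which you can check with a two-step query (requires `curl` and `jq`; this mirrors Docker's documented procedure):

```shell
# Fetch an anonymous pull token for Docker Hub's rate-limit test image
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" \
  | jq -r .token)

# HEAD request; the ratelimit-limit and ratelimit-remaining headers
# show your total allowance and how much of it is left
curl -s --head -H "Authorization: Bearer $TOKEN" \
  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest \
  | grep -i ratelimit
```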

Conclusion

Understanding Kubernetes error codes goes hand-in-hand with understanding Kubernetes itself. In the case of `Failed to pull image`, the cluster is reporting a problem where one or more of its nodes have accepted a pod scheduling request but have been unable to acquire its image.

You can fix this problem by making basic checks to ensure your YAML contains a valid image reference with no typos and a correct tag. You should then move on to confirm that the cluster has access to the registry credentials and isn’t facing a network outage.

As this error is often the result of simple mistakes, you might want to consider using automation to prevent recurrences in the future. [Datree.io](https://www.datree.io) is a tool that can validate your YAML files against custom schemas and highlight any misconfigurations. You could use it to check that `image` fields only reference images with specific registries and tags, lowering the chance of accidentally starting a pod with an invalid image.
