Kubernetes Error Codes: Container Not Found

Because Kubernetes is a complex system, you will occasionally have to deal with applications that don’t deploy correctly or don’t work as planned. Solving such problems, though, can be difficult.

Troubleshooting issues in a Kubernetes cluster requires a clear understanding of Kubernetes and its error codes. Knowing what each code means reduces the time spent debugging and helps you correct issues faster and more precisely.

In this article, you will learn what it means to get a `container not found` error code during your Kubernetes deployment and how that error can be resolved.

Requirements

You’ll need the following prerequisites to use this guide:

- You should have a good knowledge of Kubernetes.

- You’ll need some knowledge of the kubectl command-line tool for interacting with Kubernetes clusters.

- You should have a Kubernetes cluster running. You can use minikube to experiment with Kubernetes locally by creating a cluster on your personal hardware, as shown in the example after this list.
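
If you don’t have a cluster yet, a minimal local setup with minikube (assuming minikube is already installed) looks like this:


minikube start

Once the command finishes, running `kubectl get nodes` should show a single `minikube` node with a `Ready` status.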

What Is `container not found`?

Containers provide a standardized format for packaging and shipping all the components needed to run the containerized application. They eliminate the issue of the same application behaving differently in different environments by ensuring that every time the container is deployed, the containerized application runs in a consistent environment.

Managing a fleet of containers can be tedious and prone to errors. This is why Kubernetes was created. It helps you manage your containers and keep them running according to the desired state you have configured for the container.

In some instances, though, Kubernetes can’t bring the objects to the configured desired state, and it throws an error code indicating what went wrong. The `container not found` error code generally indicates that kubectl can’t establish communication with the pod or container you are trying to interact with. This could be because the pod isn’t ready to receive commands yet or because it doesn’t exist on the node.

For example, if the pod scheduled to run on your node fails to start due to a network error during the scheduling process, you’ll get a `container not found` error code when you try to interact with the pod using kubectl.
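
For instance, trying to open a shell in such a container can fail with output similar to the following (the pod name `my-app` here is hypothetical, and the exact message varies by Kubernetes version):


$ kubectl exec -it my-app -- /bin/sh

error: unable to upgrade connection: container not found ("my-app")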

How Do You Solve `container not found`?

To resolve `container not found`, you need to identify which pod the container runs in. Pods are the smallest unit of deployment in Kubernetes. Kubernetes can’t run containers directly; instead, one or more containers run together in a pod, which is scheduled onto a node. When debugging deployment issues with a container, it makes sense to interact directly with its pod, since that’s the closest unit of deployment to the container.
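
If all you know is the container’s name, you can list each pod alongside the containers it runs using kubectl’s `custom-columns` output:


kubectl get pods --all-namespaces -o custom-columns=NAMESPACE:.metadata.namespace,POD:.metadata.name,CONTAINERS:.spec.containers[*].name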

Once you identify the pod, you can determine whether it is in a state where it can accept connections. A pod may be unresponsive because it was evicted due to a lack of resources, because of network issues, or because it doesn’t exist at all.
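
A quick way to check for evictions or scheduling problems is to inspect recent cluster events, sorted by creation time:


kubectl get events --sort-by=.metadata.creationTimestamp

Look for events with reasons such as `Evicted` or `FailedScheduling`.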

You can explore these issues in more detail below, as well as how they can be resolved.

Get Information about Your Pod

There may be several reasons why your pod is not in the expected state, so you should see what the pod is reporting before you proceed. Gathering more information about the failing pod will give you better insights.

Run the following command to list all the pods across all namespaces:


kubectl get pods --all-namespaces -o wide

By default, kubectl commands run against objects in the default namespace. The pod you are interested in may exist in a different namespace, or you might want to see all pods at a glance. Adding the `--all-namespaces` flag instructs kubectl to list the pods available across all namespaces. Passing the `-o wide` flag prints additional details beyond the default columns.

If you find the failing pod on the list, use the `kubectl describe pod <pod-name>` command to retrieve detailed information about it.

For example, you can get more information about the pod `nginx` with the below command:


$ kubectl describe pod nginx

Name:         nginx
Namespace:    default
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Sat, 22 Jan 2022 13:06:31 +0100
Labels:       run=nginx
Annotations:  <none>
Status:       Running
IP:           172.17.0.4
IPs:
  IP:  172.17.0.4
Containers:
  nginx:
    Container ID:   docker://80d91c0c73cd00976fd3f6edcc80b62eb1511a6b16eb70558b9049dd6a50bd75
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:2834dc507516af02784808c5f48b7cbe38b8ed5d0f4837f16e78d00deb7e7767
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 02 Feb 2022 00:40:05 +0100
    Ready:          True
    Restart Count:  3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9snkg (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-9snkg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age    From     Message
  ----    ------          ----   ----     -------
  Normal  SandboxChanged  5m26s  kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulling         5m12s  kubelet  Pulling image "nginx"
  Normal  Pulled          4m54s  kubelet  Successfully pulled image "nginx" in 17.857226691s
  Normal  Created         4m46s  kubelet  Created container nginx
  Normal  Started         4m40s  kubelet  Started container nginx

You can also print the pod’s configuration in a YAML format using the command below:


$ kubectl get pod nginx -o yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2022-01-22T12:06:31Z"
  labels:
    run: nginx
  name: nginx
  namespace: default
  resourceVersion: "141869"
  uid: e5c84d32-6ab5-4421-b6b8-f0c549804459
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-9snkg
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: minikube
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-9snkg
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-01-22T12:06:31Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2022-02-01T23:40:07Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2022-02-01T23:40:07Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2022-01-22T12:06:31Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://80d91c0c73cd00976fd3f6edcc80b62eb1511a6b16eb70558b9049dd6a50bd75
    image: nginx:latest
    imageID: docker-pullable://nginx@sha256:2834dc507516af02784808c5f48b7cbe38b8ed5d0f4837f16e78d00deb7e7767
    lastState: {}
    name: nginx
    ready: true
    restartCount: 3
    started: true
    state:
      running:
        startedAt: "2022-02-01T23:40:05Z"
  hostIP: 192.168.49.2
  phase: Running
  podIP: 172.17.0.4
  podIPs:
  - ip: 172.17.0.4
  qosClass: BestEffort
  startTime: "2022-01-22T12:06:31Z"

From the output of both commands, you can gather more insights into the lifecycle of your pod. You should be able to determine whether the container crashed or whether there were network issues, which should help you quickly spin up the deployment again.

Determine the pod’s status by checking the `Status` field in the output of the `kubectl describe pod <pod-name>` command and the `phase` property in the output of the `kubectl get pod <pod-name> -o yaml` command.

A `Running` status denotes that the pod has been bound to a node in the cluster and all of its containers have been created, with at least one container running.

If the status is `Pending`, the pod has been accepted by the cluster, but one or more of its containers have not yet been set up and made ready to run. During this phase, the pod could be waiting to be scheduled, or it may still be downloading container images and need time to finish.
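
To list every pod stuck in this phase across the cluster, you can filter by phase with a field selector:


kubectl get pods --all-namespaces --field-selector=status.phase=Pending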

An `Unknown` status indicates that the pod’s status could not be obtained, typically because of an error communicating with the node where the pod is expected to be running. You can redeploy the object to fix this.
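
As a minimal sketch, assuming the pod was originally created from a manifest file named `pod.yaml` (a hypothetical file name), redeploying could look like this:


kubectl delete pod nginx
kubectl apply -f pod.yaml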

Check the logs using the `kubectl logs <pod-name>` command to look for anything that might help. Watch out for pod crashes, network issues, and other Kubernetes error codes like `ImagePullBackOff`.

The command below displays the logs for an `nginx` pod:


kubectl logs nginx

/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/02/01 23:40:05 [notice] 1#1: using the "epoll" event method
2022/02/01 23:40:05 [notice] 1#1: nginx/1.21.6
2022/02/01 23:40:05 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
2022/02/01 23:40:05 [notice] 1#1: OS: Linux 5.13.0-27-generic
2022/02/01 23:40:05 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/02/01 23:40:05 [notice] 1#1: start worker processes
2022/02/01 23:40:05 [notice] 1#1: start worker process 31
2022/02/01 23:40:05 [notice] 1#1: start worker process 32
2022/02/01 23:40:05 [notice] 1#1: start worker process 33
2022/02/01 23:40:05 [notice] 1#1: start worker process 34

You should see similar output showing what happened inside the container and when.
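
If the container crashed and was restarted, the current logs may not capture the failure. In that case, pass the `--previous` flag to retrieve the logs of the container’s previous instance:


kubectl logs nginx --previous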

Does the Pod Exist?

Verifying that the pod exists before going further can save you a lot of debugging time. Sometimes the pod you are targeting doesn’t exist, or its name wasn’t specified correctly.

One way to determine if the pod exists is by listing all pods on the cluster. You’ll see details on the pod status and its ready state.

Run this command to see the list of pods running on your node:


$ kubectl get pods

NAME                          READY   STATUS               RESTARTS         AGE
hello-go-57d96c569-56wwc      0/1     ImagePullBackOff     0                11d
hello-go-57d96c569-fvqgm      0/1     ImagePullBackOff     1 (108d ago)     110d
hello-go-57d96c569-l9dgq      0/1     ImagePullBackOff     1 (108d ago)     110d
hello-go-57d96c569-m96zr      0/1     ImagePullBackOff     1 (108d ago)     110d
nginx                         1/1     Running              3                10d
webserver-7fb7fd49b4-g4vvr    1/1     Running              6 (27m ago)      11d
webserver-7fb7fd49b4-qjgft    1/1     Running              6 (27m ago)      11d
webserver-7fb7fd49b4-whkb9    1/1     Running              6                11d

As you can see above, the command returns all pods running on your cluster along with details you need to debug this error code, like the pod’s [status](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#podstatus-v1-core) and its [ready status](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions) that indicates if the pod is ready to serve requests. Check the output of this command and see if it includes the pod you are running your command against.
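
You can also query a pod directly by name. If it doesn’t exist, kubectl returns an error similar to the following (the pod name `my-app` here is hypothetical):


$ kubectl get pod my-app

Error from server (NotFound): pods "my-app" not found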

Does the Pod Exist in Another Namespace?

If no namespace is specified, kubectl commands run against objects in the default namespace. If your pod exists in another namespace, be sure to specify that namespace in your command by using the `--namespace` flag.

For example, you can retrieve the pods running in the `nginx` namespace using the following:


kubectl get pods --namespace nginx

Passing the `--namespace nginx` flag tells kubectl to get the pods that exist in the `nginx` namespace.
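
If you work mostly in a single namespace, you can make it the default for your current context so you don’t have to pass the flag on every command:


kubectl config set-context --current --namespace=nginx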

If you are unsure of the namespace, you can get a list of all namespaces that exist in your cluster using the command below:


kubectl get namespace

Otherwise, you can simply retrieve a list of pods in all namespaces.

Append the `--all-namespaces` flag to the previous command, as shown below:


kubectl get pods --all-namespaces
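
Combining this with `grep` is a quick way to find which namespace a given pod lives in:


kubectl get pods --all-namespaces | grep nginx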

Conclusion

Kubernetes is complex, and the same is true for debugging it. As you’ve seen, you have multiple options for diagnosing a problem. The `container not found` error code may initially be frustrating, but using these methods, you’ll be better able to debug and redeploy your application.
