Kubernetes Troubleshooting - Fixing Pod Has Unbound Immediate PersistentVolumeClaims Error

Setting up a single Kubernetes cluster on a single server is easy from a maintenance point of view, because you do not need to worry about Persistent Volumes (PV), Persistent Volume Claims (PVC), or Pods deployed across different regions and zones (e.g. eu-north-1, us-west-1).

The issue is not the zones themselves: when you start scaling your Kubernetes cluster for your production environment, you need to take care of many other elements (databases, Persistent Volume size, the mapping between Persistent Volumes and Persistent Volume Claims).

In this blog we'll cover different types of Kubernetes configuration errors associated with zones, PVs, and PVCs:

  1. Kubernetes Pod Warning: 1 node has volume node affinity conflict
  2. Insufficient Capacity Size or Resource
  3. The accessModes of your Persistent Volume and Persistent Volume Claim are inconsistent
  4. The number of PersistentVolumeClaims is greater than the number of PersistentVolumes

1. Kubernetes Pod Warning: 1 node has volume node affinity conflict

To understand the volume node affinity conflict error, let's take an example Kubernetes cluster setup:

  1. You have set up a Kubernetes cluster running in AWS in the eu-north-1 (Stockholm) region
  2. For the same Kubernetes cluster, you have defined a PV (Persistent Volume) and PVC (Persistent Volume Claim), but in a different AWS region: eu-west-1 (Ireland)
  3. Now you try to schedule a Pod in eu-north-1 (Stockholm) using the PV and PVC that live in the eu-west-1 (Ireland) region
  4. Whenever you try to schedule a Pod in a different zone than its volume, the Pod will fail to start and you will see - Kubernetes Pod Warning: 1 node has volume node affinity conflict

How to fix this?

The first troubleshooting step is to check how many zones you are using for your Kubernetes cluster, and in which zone you have defined your Persistent Volume (PV) and Persistent Volume Claim (PVC).

Verification of the zones can easily be done through the web console provided by your cloud service provider.

But for the verification of the PV and PVC, run the following commands:

$ kubectl get pv
$ kubectl describe pv
$ kubectl get pvc
$ kubectl describe pvc
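
If your PVs were provisioned in a cloud zone, that zone normally shows up as a node affinity term or a topology label on the PV object. A quick way to inspect it, assuming the standard topology.kubernetes.io/zone label is present (my-test-pv is the example PV name used below):

$ kubectl get pv my-test-pv --show-labels
$ kubectl get pv my-test-pv -o jsonpath='{.spec.nodeAffinity}{"\n"}'

Compare the zone you see there with the zone of the nodes where the Pod is being scheduled (kubectl get nodes --show-labels).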

Fix 1: Delete the PV and PVC

To fix the issue you can delete the PV and PVC from the other zone and recreate the same resources in the same zone where you are trying to schedule the Pod.

Here are the commands for deleting the PV and PVC:

$ kubectl delete pvc my-test-pvc
$ kubectl delete pv my-test-pv

Now after deleting the PV and PVC, recreate them in the same zone where you're running your Pod:

$ kubectl apply -f my-test-pv.yaml
$ kubectl apply -f my-test-pvc.yaml
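
If you need a starting point for my-test-pv.yaml, here is a minimal sketch of a zone-pinned PV, assuming the AWS EBS CSI driver and the standard topology.kubernetes.io/zone node label; the volume ID, zone, and storage class name are placeholders for illustration only:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-test-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp3                     # placeholder storage class name
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0123456789abcdef0     # placeholder EBS volume ID
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - eu-north-1a               # the zone where the Pod will run

my-test-pvc.yaml can then reference this volume with volumeName: my-test-pv, as shown in the later examples of this post.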

Now after creating the above PV and PVC, you can schedule the Pod or create your Kubernetes deployment in the same zone and your error Kubernetes Pod Warning: 1 node has volume node affinity conflict should be fixed.

Fix 2: Move the Pod to the same zone as the PV and PVC

Since this issue is all about the PV and PVC living in a different zone than the Pod, you can instead move the Pod to the zone where the PV and PVC have already been created.

Let’s assume your Deployment is running in ZONE-1 and your PV and PVC were created in ZONE-2. Start by deleting the Deployment from ZONE-1:

$ kubectl delete deployment my-deployment

Then create a new Deployment that targets ZONE-2:

$ kubectl apply -f my-deployment.yaml
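
In practice, "targeting" ZONE-2 means constraining the Deployment's Pods to that zone. Here is a minimal sketch of what my-deployment.yaml could look like, assuming the standard topology.kubernetes.io/zone node label; the app label, container image, claim name, and zone value (eu-west-1a) are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      nodeSelector:
        topology.kubernetes.io/zone: eu-west-1a   # schedule only in the zone that holds the PV and PVC
      containers:
        - name: app
          image: nginx                            # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-test-pvc                # the PVC created in ZONE-2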

Once you have a successful Deployment in ZONE-2 along with the PV and PVC, your Kubernetes Pod Warning: 1 node has volume node affinity conflict issue should be fixed.

(Note: if you are using a StorageClass, it is recommended to provision the volumes and create the Deployment in the same zone.)
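
If you rely on a StorageClass for dynamic provisioning, a common way to avoid this conflict altogether is volumeBindingMode: WaitForFirstConsumer, which delays volume creation until the Pod is scheduled, so the volume ends up in the Pod's zone. A minimal sketch, assuming the AWS EBS CSI driver (the class name is a placeholder):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-wait-for-consumer               # placeholder name
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer     # provision the volume in the zone where the Pod lands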

2. Insufficient Capacity Size or Resource

The next error we'll cover is unable to locate a PV with sufficient capacity. As the name suggests, it is a problem with the PVC (Persistent Volume Claim) requesting more storage than the PV (Persistent Volume) provides.

Here is an example configuration to understand this issue:

1. Create a Persistent Volume (PV) with 1Gi of storage


apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/vagrant/storage
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1

2. Create a Persistent Volume Claim (PVC) with 3Gi of storage (here we are requesting 3Gi, which is more than the Persistent Volume's 1Gi)


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  volumeName: test-pv
  storageClassName: local-storage
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

3. Now apply the PV (Persistent Volume) configuration:

$ kubectl apply -f test-pv.yml


persistentvolume/test-pv created

$ kubectl describe pv test-pv


Name:          	test-pv
Labels:        	<none>
Annotations:   	<none>
Finalizers:    	[kubernetes.io/pv-protection]
StorageClass:  	local-storage
Status:        	Available
Claim:
Reclaim Policy:	Retain
Access Modes:  	RWO
VolumeMode:    	Filesystem
Capacity:      	1Gi
Node Affinity:
  Required Terms:
	Term 0:    	kubernetes.io/hostname in [node1]
Message:
Source:
	Type:  LocalVolume (a persistent volume backed by local storage on a node)
	Path:  /home/vagrant/storage
Events:	<none>

4. After that, apply the PVC (Persistent Volume Claim) configuration with 3Gi:

$ kubectl apply -f test-pvc.yml


persistentvolumeclaim/test-pvc created

5. Now verify the PVC:

$ kubectl describe pvc test-pvc


Name:      	test-pvc
Namespace: 	default
StorageClass:  local-storage
Status:    	Pending
Volume:    	test-pv
Labels:    	<none>
Annotations:   <none>
Finalizers:	[kubernetes.io/pvc-protection]
Capacity:  	0
Access Modes:
VolumeMode:	Filesystem
Used By:   	<none>
Events:
  Type 	Reason      	Age              	From                     	Message
  ---- 	------      	----             	----                     	-------
  Warning  VolumeMismatch  12s (x25 over 6m1s)  persistentvolume-controller  Cannot bind to requested volume "test-pv": requested PV is too small

How to fix this?

To fix the issue, your PVC should always request an amount of storage less than or equal to the PV's capacity.

So update test-pvc.yml, change the storage request from 3Gi to 1Gi, and re-apply the PVC configuration.
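
The only line that changes in test-pvc.yml is the storage request:

# test-pvc.yml (excerpt)
resources:
  requests:
    storage: 1Gi   # was 3Gi; must not exceed the PV's 1Gi capacity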

1. Delete the old PVC:

$ kubectl delete pvc test-pvc


persistentvolumeclaim "test-pvc" deleted

2. Re-apply updated PVC:

$ kubectl apply -f test-pvc.yml


persistentvolumeclaim/test-pvc created

3. Verify the status of the PVC:

$ kubectl describe pvc test-pvc


Name:      	test-pvc
Namespace: 	default
StorageClass:  local-storage
Status:    	Bound
Volume:    	test-pv
Labels:    	<none>
Annotations:   pv.kubernetes.io/bind-completed: yes
Finalizers:	[kubernetes.io/pvc-protection]
Capacity:  	1Gi
Access Modes:  RWO
VolumeMode:	Filesystem
Used By:   	<none>
Events:    	<none>

3. The accessModes of Your Persistent Volume and Persistent Volume Claim Are Inconsistent

The next error we'll cover is incompatible accessMode. Whether you create a PV (Persistent Volume) or a PVC (Persistent Volume Claim), you set the accessModes in each configuration.

As a rule of thumb, you should always set the same accessMode for both the PV (Persistent Volume) and the PVC (Persistent Volume Claim). If there is a mismatch in the accessMode, your PVC (Persistent Volume Claim) will not be able to bind to the PV (Persistent Volume).

Let’s take an example:

1. Create a PV (Persistent Volume) with accessMode: ReadWriteMany


apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/vagrant/storage
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1

2. Apply the above PV configuration:

$ kubectl apply -f test-pv.yml


persistentvolume/test-pv created

3. Verify the status of test-pv:

$ kubectl describe pv test-pv


Name:          	test-pv
Labels:        	<none>
Annotations:   	<none>
Finalizers:    	[kubernetes.io/pv-protection]
StorageClass:  	local-storage
Status:        	Available
Claim:
Reclaim Policy:	Retain
Access Modes:  	RWX
VolumeMode:    	Filesystem
Capacity:      	1Gi
Node Affinity:
  Required Terms:
	Term 0:    	kubernetes.io/hostname in [node1]
Message:
Source:
	Type:  LocalVolume (a persistent volume backed by local storage on a node)
	Path:  /home/vagrant/storage
Events:	<none>

4. Create a PVC (Persistent Volume Claim) with accessMode: ReadWriteOnce


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  volumeName: test-pv
  storageClassName: local-storage
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

5. Apply the above PVC configuration:

$ kubectl apply -f test-pvc.yml


persistentvolumeclaim/test-pvc created

6. Verify the status of test-pvc:

$ kubectl describe pvc test-pvc


Name:      	test-pvc
Namespace: 	default
StorageClass:  local-storage
Status:    	Pending
Volume:    	test-pv
Labels:    	<none>
Annotations:   <none>
Finalizers:	[kubernetes.io/pvc-protection]
Capacity:  	0
Access Modes:
VolumeMode:	Filesystem
Used By:   	<none>
Events:
  Type 	Reason      	Age           	From                     	Message
  ---- 	------      	----          	----                     	-------
  Warning  VolumeMismatch  7s (x2 over 10s)  persistentvolume-controller  Cannot bind to requested volume "test-pv": incompatible accessMode

How to fix this?

The fix to this problem is pretty simple - use the same accessMode for both the PV and the PVC.

In our example, let's change the accessMode of the PV to ReadWriteOnce and re-apply both configurations.
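
The only change in test-pv.yml is the accessModes entry:

# test-pv.yml (excerpt)
accessModes:
  - ReadWriteOnce   # was ReadWriteMany; now matches the PVC's accessMode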

$ kubectl apply -f test-pv.yml


persistentvolume/test-pv created

$ kubectl apply -f test-pvc.yml


persistentvolumeclaim/test-pvc created

Verify the status of PVC:

$ kubectl describe pvc test-pvc


Name:      	test-pvc
Namespace: 	default
StorageClass:  local-storage
Status:    	Bound
Volume:    	test-pv
Labels:    	<none>
Annotations:   pv.kubernetes.io/bind-completed: yes
Finalizers:	[kubernetes.io/pvc-protection]
Capacity:  	1Gi
Access Modes:  RWO
VolumeMode:	Filesystem
Used By:   	<none>
Events:    	<none>

4. The Number of PersistentVolumeClaims Is Greater Than the Number of PersistentVolumes

The next error we are going to talk about occurs when more than one PVC (Persistent Volume Claim) is mapped to the same PV (Persistent Volume).

Ideally, you should map a single PV (Persistent Volume) to a single PVC (Persistent Volume Claim); if more than one PVC tries to use the same PV, you may face the FailedBinding issue.

Let’s take an example:

  1. Create one PV (Persistent Volume): test-pv.yml
  2. Create two PVCs (Persistent Volume Claims): test-pvc.yml, test-pvc-2.yml
  3. In both PVCs (test-pvc.yml, test-pvc-2.yml) use the same PV (Persistent Volume), i.e. test-pv

Here is test-pv.yml:


apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/vagrant/storage
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1

Here is the first PVC, test-pvc.yml:


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  volumeName: test-pv
  storageClassName: local-storage
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

$ kubectl apply -f test-pvc.yml


persistentvolumeclaim/test-pvc created

Here is the second PVC, test-pvc-2.yml:


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-2
spec:
  volumeName: test-pv
  storageClassName: local-storage
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

$ kubectl apply -f test-pvc-2.yml


persistentvolumeclaim/test-pvc-2 created

After applying the second PVC, verify its status:

$ kubectl describe pvc test-pvc-2


Name:      	test-pvc-2
Namespace: 	default
StorageClass:  local-storage
Status:    	Pending
Volume:    	test-pv
Labels:    	<none>
Annotations:   <none>
Finalizers:	[kubernetes.io/pvc-protection]
Capacity:  	0
Access Modes:
VolumeMode:	Filesystem
Used By:   	<none>
Events:
  Type 	Reason     	Age            	From                     	Message
  ---- 	------     	----           	----                     	-------
  Warning  FailedBinding  14s (x2 over 14s)  persistentvolume-controller  volume "test-pv" already bound to a different claim.

As you can see, the problem is that the second PVC (Persistent Volume Claim) cannot bind to the Persistent Volume (PV), i.e. test-pv, because it has already been taken by the first claim, test-pvc.

How to fix this?

Well, you should start by deleting the PVC (Persistent Volume Claim) where you faced the issue. In the above example, that is the second PVC, i.e. test-pvc-2.

$ kubectl delete pvc test-pvc-2


persistentvolumeclaim "test-pvc-2" deleted

After deleting, do the following (a sketch of these steps follows the list):

  1. Create a new persistent volume
  2. Map the newly created persistent volume to test-pvc-2 
  3. Re-apply the test-pvc-2 configuration again
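
Here is a minimal sketch of those steps. The new PV name (test-pv-2) and its local path are hypothetical, so adjust them to your environment:

# test-pv-2.yml – a second local PV for test-pvc-2 to bind to
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv-2
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/vagrant/storage-2      # hypothetical second path on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1

Then, in test-pvc-2.yml, point volumeName at the new volume and re-apply both files:

# test-pvc-2.yml (excerpt)
volumeName: test-pv-2   # was test-pv

$ kubectl apply -f test-pv-2.yml
$ kubectl apply -f test-pvc-2.yml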
