Absent AWS volume but "Bound" PVC in Kubernetes


See the output below. What confuses me is that the PVC status is Bound, yet the volume does not exist in AWS. I am using Kubernetes 1.17.

I also checked that no Pod is using this PVC (using https://github.com/yashbhutwala/kubectl-df-pv in addition to the describe commands below).
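
(For anyone wanting to reproduce the check, a sketch using the claim and namespace from the output below: kubectl describe pvc prints a "Mounted By" field, and the Pod specs in the namespace can be scanned for the claim name.)

➜ kubectl describe pvc grafana-persistent-storage -n metrics | grep -i "mounted by"
➜ kubectl get pods -n metrics -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.volumes[*].persistentVolumeClaim.claimName}{"\n"}{end}'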

Any ideas how this could happen? If the volume was manually deleted via the AWS CLI (or the web console), does that mean Kubernetes is not handling this situation correctly?

➜ k get pvc -n metrics
NAME                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
grafana-persistent-storage             Bound    pvc-1395291c-d89b-11e9-8a64-0a4976158cfe   1Gi        RWO            gp2            398d

➜ k describe pv pvc-1395291c-d89b-11e9-8a64-0a4976158cfe           
Name:              pvc-1395291c-d89b-11e9-8a64-0a4976158cfe
Labels:            failure-domain.beta.kubernetes.io/region=eu-central-1
                   failure-domain.beta.kubernetes.io/zone=eu-central-1c
Annotations:       kubernetes.io/createdby: aws-ebs-dynamic-provisioner
                   pv.kubernetes.io/bound-by-controller: yes
                   pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      gp2
Status:            Bound
Claim:             metrics/grafana-persistent-storage
Reclaim Policy:    Delete
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          1Gi
Node Affinity:     
  Required Terms:  
    Term 0:        failure-domain.beta.kubernetes.io/zone in [eu-central-1c]
                   failure-domain.beta.kubernetes.io/region in [eu-central-1]
Message:           
Source:
    Type:       AWSElasticBlockStore (a Persistent Disk resource in AWS)
    VolumeID:   aws://eu-central-1c/vol-0b92b7db07b87b3e8
    FSType:     ext4
    Partition:  0
    ReadOnly:   false
Events:         <none>

➜ aws ec2 describe-volumes --volume-ids vol-0b92b7db07b87b3e8

An error occurred (InvalidVolume.NotFound) when calling the DescribeVolumes operation: The volume 'vol-0b92b7db07b87b3e8' does not exist.

➜ env | grep AWS              
AWS_ACCESS_KEY_ID=xxx
AWS_SECRET_ACCESS_KEY=yyy
AWS_DEFAULT_REGION=eu-central-1
AWS_DEFAULT_OUTPUT=table

➜ kubectl version  
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.9-eks-4c6976", GitCommit:"4c6976793196d70bc5cd29d56ce5440c9473648e", GitTreeState:"clean", BuildDate:"2020-07-17T18:46:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

There are 2 answers

Fritz Duchardt (BEST ANSWER)

It is quite possible to delete volumes from the AWS console while they are referenced by a Kubernetes PersistentVolume, but only if no Pod is mounting them.

If a Pod is mounting the PVC of the PV in question, deletion from the AWS console is not possible, since the storage is in use (attached).

In other words, the mere existence of PVCs and PVs that point at deleted storage does not cause Kubernetes to mark those resources as failed.
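
If you want to clean up the stale objects, something along these lines should work (a sketch, assuming nothing mounts the claim; with the Delete reclaim policy the PV should be garbage-collected once the PVC is gone, otherwise remove it explicitly):

➜ kubectl delete pvc grafana-persistent-storage -n metrics
➜ kubectl get pv pvc-1395291c-d89b-11e9-8a64-0a4976158cfe
➜ kubectl delete pv pvc-1395291c-d89b-11e9-8a64-0a4976158cfe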

Arghya Sadhu

You should use the Amazon EBS CSI driver instead of the in-tree Amazon EBS storage provisioner. Use dynamic provisioning by creating a StorageClass:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
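
A claim that uses this class could then look like the following (a sketch; the name, namespace, and size are placeholders):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: grafana-persistent-storage
  namespace: metrics
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 1Gi

With volumeBindingMode: WaitForFirstConsumer, the volume is only provisioned once a Pod that uses the claim is scheduled.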