Not able to completely remove Kubernetes CustomResource


I'm having trouble deleting a custom resource definition. I'm trying to upgrade kubeless from v1.0.0-alpha.7 to v1.0.0-alpha.8.

I tried to remove all the created custom resources by doing:

$ kubectl delete -f kubeless-v1.0.0-alpha.7.yaml
deployment "kubeless-controller-manager" deleted
serviceaccount "controller-acct" deleted
clusterrole "kubeless-controller-deployer" deleted
clusterrolebinding "kubeless-controller-deployer" deleted
customresourcedefinition "functions.kubeless.io" deleted
customresourcedefinition "httptriggers.kubeless.io" deleted
customresourcedefinition "cronjobtriggers.kubeless.io" deleted
configmap "kubeless-config" deleted

But when I check, the functions CRD is still there:

$ kubectl get customresourcedefinition
NAME                    AGE
functions.kubeless.io   21d

And because of this, when I next try to upgrade, I see:

$ kubectl create -f kubeless-v1.0.0-alpha.8.yaml
Error from server (AlreadyExists): error when creating "kubeless-v1.0.0-alpha.8.yaml": object is being deleted: customresourcedefinitions.apiextensions.k8s.io "functions.kubeless.io" already exists

I think that because of this mismatch in the function definition, the hello world example is failing:

$ kubeless function deploy hellopy --runtime python2.7 --from-file test.py --handler test.hello
INFO[0000] Deploying function...
FATA[0000] Failed to deploy hellopy. Received:
the server does not allow this method on the requested resource (post functions.kubeless.io)

Finally, here is the output of:

$ kubectl describe customresourcedefinitions.apiextensions.k8s.io
Name:         functions.kubeless.io
Namespace:
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apiextensions.k8s.io/v1beta1","description":"Kubernetes Native Serverless Framework","kind":"CustomResourceDefinition","metadata":{"anno...
API Version:  apiextensions.k8s.io/v1beta1
Kind:         CustomResourceDefinition
Metadata:
  Creation Timestamp:             2018-08-02T17:22:07Z
  Deletion Grace Period Seconds:  0
  Deletion Timestamp:             2018-08-24T17:15:39Z
  Finalizers:
    customresourcecleanup.apiextensions.k8s.io
  Generation:        1
  Resource Version:  99792247
  Self Link:         /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/functions.kubeless.io
  UID:               951713a6-9678-11e8-bd68-0a34b6111990
Spec:
  Group:  kubeless.io
  Names:
    Kind:       Function
    List Kind:  FunctionList
    Plural:     functions
    Singular:   function
  Scope:        Namespaced
  Version:      v1beta1
Status:
  Accepted Names:
    Kind:       Function
    List Kind:  FunctionList
    Plural:     functions
    Singular:   function
  Conditions:
    Last Transition Time:  2018-08-02T17:22:07Z
    Message:               no conflicts found
    Reason:                NoConflicts
    Status:                True
    Type:                  NamesAccepted
    Last Transition Time:  2018-08-02T17:22:07Z
    Message:               the initial names have been accepted
    Reason:                InitialNamesAccepted
    Status:                True
    Type:                  Established
    Last Transition Time:  2018-08-23T13:29:45Z
    Message:               CustomResource deletion is in progress
    Reason:                InstanceDeletionInProgress
    Status:                True
    Type:                  Terminating
Events:                    <none>
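
The Terminating condition and the finalizer above look like the culprit; they can also be read out directly (a jsonpath sketch based on the fields shown above):

$ kubectl get crd functions.kubeless.io -o jsonpath='{.metadata.finalizers}'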

There are 5 answers

Answer by smk (accepted):

So it turns out the root cause was that custom resources with finalizers can "deadlock". The CustomResourceDefinition "functions.kubeless.io" had a

Finalizers:
    customresourcecleanup.apiextensions.k8s.io

and this can leave it in a bad state when deleting.

https://github.com/kubernetes/kubernetes/issues/60538

I followed the steps mentioned in this workaround, and the CRD now gets deleted.
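
For reference, the workaround in that issue boils down to clearing the CRD's finalizer list so the pending delete can complete; a minimal sketch against the CRD from this question:

$ kubectl patch crd/functions.kubeless.io -p '{"metadata":{"finalizers":[]}}' --type=merge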

Answer by Yoker:

I had to get rid of a few other things:

kubectl get mutatingwebhookconfiguration | ack consul | awk '{print $1}' | xargs -I {} kubectl delete mutatingwebhookconfiguration {}

kubectl get clusterrolebinding | ack consul | awk '{print $1}' | xargs -I {} kubectl delete clusterrolebinding {}

kubectl get clusterrole | ack consul | awk '{print $1}' | xargs -I {} kubectl delete clusterrole {}
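
(Note: ack is a grep-like tool that may not be installed everywhere; plain grep works identically. The first command, rewritten as a sketch:)

$ kubectl get mutatingwebhookconfiguration | grep consul | awk '{print $1}' | xargs -I {} kubectl delete mutatingwebhookconfiguration {}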

Answer by Aviel Yosef:

Try:

oc patch some.crd/crd_name -p '{"metadata":{"finalizers":[]}}' --type=merge

This solved my problem after a forced delete attempt got stuck.
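
oc is the OpenShift CLI; on plain Kubernetes the same merge patch works with kubectl, or the finalizers can be stripped by rewriting the object through the API. A sketch of the latter, assuming jq is installed and using this question's stuck CRD as the name:

$ kubectl get crd functions.kubeless.io -o json | jq '.metadata.finalizers = []' | kubectl replace -f -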

Answer by Clare Chu:

$ kubectl get crd

NAME                                                            CREATED AT
accesscontrolpolicies.networking.zephyr.solo.io                 2020-04-22T12:58:39Z
istiooperators.install.istio.io                                 2020-04-22T13:49:20Z
kubernetesclusters.discovery.zephyr.solo.io                     2020-04-22T12:58:39Z
meshes.discovery.zephyr.solo.io                                 2020-04-22T12:58:39Z
meshservices.discovery.zephyr.solo.io                           2020-04-22T12:58:39Z
meshworkloads.discovery.zephyr.solo.io                          2020-04-22T12:58:39Z
trafficpolicies.networking.zephyr.solo.io                       2020-04-22T12:58:39Z
virtualmeshcertificatesigningrequests.security.zephyr.solo.io   2020-04-22T12:58:39Z
virtualmeshes.networking.zephyr.solo.io                         2020-04-22T12:58:39Z
$ kubectl delete crd istiooperators.install.istio.io

The delete errors out, so clear the finalizers instead:

$ kubectl patch crd/istiooperators.install.istio.io -p '{"metadata":{"finalizers":[]}}' --type=merge
The patch succeeds and the stuck CRD istiooperators.install.istio.io is removed.

Result:

NAME                                                            CREATED AT
accesscontrolpolicies.networking.zephyr.solo.io                 2020-04-22T12:58:39Z
kubernetesclusters.discovery.zephyr.solo.io                     2020-04-22T12:58:39Z
meshes.discovery.zephyr.solo.io                                 2020-04-22T12:58:39Z
meshservices.discovery.zephyr.solo.io                           2020-04-22T12:58:39Z
meshworkloads.discovery.zephyr.solo.io                          2020-04-22T12:58:39Z
trafficpolicies.networking.zephyr.solo.io                       2020-04-22T12:58:39Z
virtualmeshcertificatesigningrequests.security.zephyr.solo.io   2020-04-22T12:58:39Z
virtualmeshes.networking.zephyr.solo.io                         2020-04-22T12:58:39Z
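
Note that blanking the finalizers skips whatever cleanup the controller still intended to do, so it may be safer to delete any remaining custom resource instances first; a hedged sketch for this CRD (the plural resource name comes from the CRD above):

$ kubectl delete istiooperators --all --all-namespaces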

Answer by Ivan Aracki:

In my case, the issue was that I had deleted a custom resource object but not the custom resource definition (CRD) itself.

I fixed it with kubectl delete -f resourcedefinition.yaml, where resourcedefinition.yaml is the file in which I defined my CRDs.

So I think the best practice is not to delete custom objects manually, but to delete them via the file in which you define both the objects and the CRD. Reference.
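
A minimal sketch of that flow, assuming resourcedefinition.yaml declares both the CRD and the custom objects (the filename is the one from this answer):

$ kubectl create -f resourcedefinition.yaml
$ kubectl delete -f resourcedefinition.yaml

The delete then removes the objects and the CRD together, so nothing is left orphaned.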