Kubernetes pod in CrashLoopBackOff, need to remove a pod


I have installed Prometheus using a Helm chart, so I have 4 deployments listed:

  • prometheus-alertmanager
  • prometheus-server
  • prometheus-pushgateway
  • prometheus-kube-state-metrics

All pods of those deployments are running fine. By mistake I restarted one deployment with this command:

kubectl rollout restart deployment prometheus-alertmanager

Now a new pod is being created and keeps crashing. If I delete the deployment, the previous (working) pod gets deleted as well. So what can I do about that CrashLoopBackOff pod?
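
For reference, this is roughly how I am inspecting the failing pod (the pod name below is just a placeholder from my cluster):

# shows the pod stuck in CrashLoopBackOff
kubectl get pods

# events and the last container state of the crashing pod
kubectl describe pod prometheus-alertmanager-xxxxx

# logs of the previously crashed container instance
kubectl logs prometheus-alertmanager-xxxxx --previous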

Screenshot of kubectl output


There are 2 answers

Wytrzymały Wiktor:

You can simply delete that pod with the kubectl delete pod <pod_name> command, or delete all pods in CrashLoopBackOff status with:

kubectl delete pod `kubectl get pods | awk '$3 == "CrashLoopBackOff" {print $1}'`

Make sure that the corresponding deployment is set to 1 replica (or whatever number you have chosen). If you delete a pod (or pods) of that deployment, it will create new ones to keep the desired replica count.
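
For example, assuming the deployment is named prometheus-alertmanager as in the question, a rough sequence would be:

# check the desired replica count of the deployment
kubectl get deployment prometheus-alertmanager

# delete the crashing pod; the deployment controller will recreate it
kubectl delete pod <pod_name>

# watch the replacement pod come up
kubectl get pods -w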

islamhamdi:

These two pods (one running and the other in CrashLoopBackOff) belong to different ReplicaSets, which you can tell from the hash in their names: e.g. pod-abc-123 and pod-abc-456 come from the same ReplicaSet (and hence the same Deployment template), whereas pod-abc-123 and pod-def-566 come from different ReplicaSets.

A Deployment creates a ReplicaSet under the hood, so make sure you delete the corresponding old ReplicaSet: run kubectl get rs | grep 99dd to find it and delete that one, similar to the prometheus-server one.
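
A rough sketch of that, reusing the 99dd hash mentioned above (the full ReplicaSet name will differ in your cluster):

# list ReplicaSets and spot the stale one left over from the restart
kubectl get rs | grep 99dd

# delete the stale ReplicaSet; the pod it owns is removed with it
kubectl delete rs <old_replicaset_name>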