I have installed Prometheus using its Helm chart, so I got four Deployments:
- prometheus-alertmanager
- prometheus-server
- prometheus-pushgateway
- prometheus-kube-state-metrics
All pods of these Deployments are running as expected. By mistake, I restarted one Deployment using this command:
kubectl rollout restart deployment prometheus-alertmanager
Now a new pod is being created and keeps crashing; if I delete the Deployment, the previous pod is deleted as well. So what can I do about that CrashLoopBackOff pod?
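Before deleting anything, it can help to check why the new pod keeps crashing. A minimal sketch, assuming the pod runs in the current namespace (substitute the actual pod name):

# Show events and container state for the crashing pod
kubectl describe pod <pod_name>
# Show logs from the previous (crashed) container run
kubectl logs <pod_name> --previous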
You can simply delete that pod with the
kubectl delete pod <pod_name>
command, or try to delete all pods stuck in CrashLoopBackOff status at once, as in the sketch below.
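A minimal sketch for the bulk delete, assuming the pods are in the current namespace (the STATUS column is the third field of kubectl get pods output; -r makes GNU xargs skip the delete when nothing matches):

# Delete every pod whose STATUS reads CrashLoopBackOff
kubectl get pods --no-headers | awk '$3 == "CrashLoopBackOff" {print $1}' | xargs -r kubectl delete pod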
Make sure that the corresponding Deployment is set to 1 replica (or any other chosen number). If you delete a pod (or pods) of that Deployment, it will create new ones while keeping the desired replica count.
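For example, to verify the desired count and scale back to one replica if needed (using the Deployment name from the question):

# Check desired vs. available replicas
kubectl get deployment prometheus-alertmanager
# Restore the replica count
kubectl scale deployment prometheus-alertmanager --replicas=1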