I like to monitor the containers using Prometheus and cAdvisor so that when a container restarts, I get an alert. I wonder if anyone has a sample Prometheus alert for this.
How can I alert for container restarted?
54.6k views · Asked by qingsong · There are 4 answers
The following PromQL query returns the containers that were restarted during the last 10 minutes, together with the number of restarts per container over that window:
(sum(increase(kube_pod_container_status_restarts_total[10m])) by (container)) > 0
The lookbehind window in square brackets (10m in the query above) can be tuned to your needs. See these docs for the possible values the lookbehind window accepts.
The query works in the following way:
- The kube_pod_container_status_restarts_total metric is exposed by kube-state-metrics. Note that kube-state-metrics is an add-on that must be deployed separately; it is not part of a default Kubernetes installation. See these docs for the exposed pod-level metrics.
- The inner increase(kube_pod_container_status_restarts_total[10m]) calculates the number of container restarts during the last 10 minutes. See the docs for the increase() function.
- The outer sum(...) by (container) is used solely for removing all labels except the container label from the result. See the docs for sum().
- The result is then compared to zero with > 0, which filters out containers with zero restarts during the last 10 minutes. See the docs for comparison operators.
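To turn this query into an alert, it can be dropped into a Prometheus 2.x rule file. The sketch below is illustrative: the group name, alert name, and severity are my own choices, not a canonical rule.

```yaml
groups:
  - name: container-restarts
    rules:
      - alert: ContainerRestarted
        # Fires for each container that restarted at least once in the last 10 minutes
        expr: (sum(increase(kube_pod_container_status_restarts_total[10m])) by (container)) > 0
        labels:
          severity: warning
        annotations:
          summary: "Container {{ $labels.container }} restarted in the last 10 minutes"
```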
If you are running in Kubernetes, you can deploy the kube-state-metrics container, which publishes the restart metric for pods: https://github.com/kubernetes/kube-state-metrics
I used the following Prometheus alert rule for finding container restarts within an hour (the time window can be adjusted). It may be helpful for you.
Prometheus Alert Rule Sample
ALERT ContainerRestart
IF rate(kube_pod_container_status_restarts[1h]) * 3600 > 1
FOR 5s
LABELS {action_required = "true", severity="critical/warning/info"}
ANNOTATIONS {DESCRIPTION="Pod {{$labels.namespace}}/{{$labels.pod}} restarted more than once during the last hour.",
SUMMARY="Container {{ $labels.container }} in Pod {{$labels.namespace}}/{{$labels.pod}} restarted more than once during the last hour."}
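The rule above uses the legacy Prometheus 1.x rule syntax; in Prometheus 2.x, rules are written in YAML, and the metric has since been renamed kube_pod_container_status_restarts_total. A rough 2.x equivalent might look like this (the group name and severity are illustrative):

```yaml
groups:
  - name: pod-restarts
    rules:
      - alert: ContainerRestart
        # More than one restart per hour, expressed as restarts/second * 3600
        expr: rate(kube_pod_container_status_restarts_total[1h]) * 3600 > 1
        for: 5s
        labels:
          severity: warning
        annotations:
          description: "Pod {{ $labels.namespace }}/{{ $labels.pod }} restarted more than once during the last hour."
```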
rate()
rate(v range-vector) calculates the per-second average rate of increase of the time series in the range vector. Breaks in monotonicity (such as counter resets due to target restarts) are automatically adjusted for. Also, the calculation extrapolates to the ends of the time range, allowing for missed scrapes or imperfect alignment of scrape cycles with the range's time period. The following example expression returns the per-second rate of HTTP requests as measured over the last 5 minutes, per time series in the range vector:
rate(http_requests_total{job="api-server"}[5m])
rate should only be used with counters. It is best suited for alerting, and for graphing of slow-moving counters.
Note that when combining rate() with an aggregation operator (e.g. sum()) or a function aggregating over time (any function ending in _over_time), always take a rate() first, then aggregate. Otherwise rate() cannot detect counter resets when your target restarts.
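Applied to the restart counter discussed here, the rate-then-aggregate ordering looks like the following sketch (the 5m window and the namespace grouping are arbitrary choices):

```
sum by (namespace) (rate(kube_pod_container_status_restarts_total[5m]))
```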
kube_pod_container_status_restarts_total
Metric Type: Counter
Labels/Tags: container=container-name, namespace=pod-namespace, pod=pod-name
Description: The number of container restarts per pod
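A sample of this counter, as scraped from kube-state-metrics, looks roughly like the following (the label values here are illustrative):

```
kube_pod_container_status_restarts_total{container="web",namespace="default",pod="web-0"} 3
```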
I use Compose and Swarm deployments, so the Kubernetes answers are not an option for me, and I came up with these rules.
Basically, both work the same way: there are multiple records for each service, but with different labels. On each restart the container_label_restartcount label changes, and container_label_com_docker_swarm_service_name acts as the service name. So the idea is simply to count unique records for each instance and name. I personally think that sending an alert for every single restart is wrong and not useful; I chose to alert if there are more than 5 restarts over a 15m period. In my rules I picked the container_last_seen metric more or less at random. It actually doesn't matter which metric you use, because the counting is done by the difference in labels; we just need a persistent metric. Also, note the - 1 at the end of the expression. We have to subtract 1 because we are counting unique records, so there is always at least one if your container is running. You may need to adapt this example for Swarm services with multiple replicas, but you get the idea of how to count unique labels.
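The counting idea described above could be sketched in PromQL as follows. This is my reading of the answer, not the author's exact rule: the grouping label comes from cAdvisor's Swarm labels, and the threshold matches the 5-restarts-in-15m policy described.

```
# One series per unique label set seen in the window; each restart
# produces a new label set, so (series count - 1) approximates restarts.
(
  count by (container_label_com_docker_swarm_service_name) (
    count_over_time(container_last_seen[15m])
  ) - 1
) > 5
```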