I've created a metric filter just to get an idea of how many times a specific log pattern shows up, nothing fancy. The Metric value is set to 1 and the Default value is set to 0. Since it's not a high-resolution metric, CloudWatch aggregates it over a one-minute period. All good with that.
What I don't understand is the difference between the Sum and Sample Count statistics. Why would Sum and Sample Count have different values?
- If we assume there was no record matching the filter pattern in the 1-minute interval, Sum would be 0 and Sample Count would be 0.
- If we assume there was at least one record matching the filter pattern in the 1-minute interval, Sum would be X and Sample Count would be X, where X is greater than 0.
An example:
Let's say I created a metric filter with the pattern "ERROR:", with the Metric value set to 1 and the Default value set to 0.
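For context, this is roughly the equivalent of what I set up (I used the console, but a boto3 sketch makes the configuration explicit). The log group, filter, metric, and namespace names below are placeholders I made up, not my real ones:

```python
import boto3

logs = boto3.client("logs")

# Placeholder names, for illustration only.
logs.put_metric_filter(
    logGroupName="/my/app/log-group",
    filterName="error-count-filter",
    filterPattern='"ERROR:"',          # term filter; quoted because of the colon
    metricTransformations=[
        {
            "metricName": "ErrorCount",
            "metricNamespace": "MyApp",
            "metricValue": "1",        # the Metric value mentioned above
            "defaultValue": 0.0,       # the Default value mentioned above
        }
    ],
)
```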
We have the following logs across three different log streams under the same log group, all within a specific one-minute window:
Log stream 1:
- ERROR: XXXXXXX
- INFO: XXXXXX
Log stream 2:
- INFO: XXXXXX
- INFO: XXXXXX
Log stream 3:
- ERROR: XXXXXXX
- ERROR: XXXXXXX
- ERROR: XXXXXXX
What would the values for Sum and Sample Count be, in your opinion? 4 for both, right!?
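If it helps, this is roughly how I'm reading the two statistics back for that minute (the namespace and metric name are the placeholder ones from the sketch above, and the time window is made up):

```python
import boto3
from datetime import datetime, timezone

cloudwatch = boto3.client("cloudwatch")

# Placeholder names and a made-up one-minute window.
resp = cloudwatch.get_metric_statistics(
    Namespace="MyApp",
    MetricName="ErrorCount",
    StartTime=datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc),
    EndTime=datetime(2024, 1, 1, 12, 1, tzinfo=timezone.utc),
    Period=60,
    Statistics=["Sum", "SampleCount"],
)

for point in resp["Datapoints"]:
    print(point["Timestamp"], "Sum =", point["Sum"], "SampleCount =", point["SampleCount"])
```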
I'm hoping for some clarity about how the Default value is used here.