I've looked over this question, which didn't help much, so here goes...
I have a bunch of Lambda functions that I want to monitor and set off a CloudWatch alarm if something goes wrong. The Lambda functions are prefixed with environment names, i.e. env-1-function-1, env-1-function-2, env-2-function-1, etc.
These environments are separate, i.e. a CloudWatch alarm set up for env-1 shouldn't have anything to do with env-2. To achieve this, I started looking at SEARCH expressions.
This is my alarm:
resource "aws_cloudwatch_metric_alarm" "lambda_average_duration" {
alarm_name = "${local.env_prefix}-alarm-lambda_average_duration"
comparison_operator = "GreaterThanOrEqualToThreshold"
evaluation_periods = "1"
threshold = "40000"
alarm_description = "This alarm monitors lambda average duration and triggers if the average of durations rise above 40 seconds."
alarm_actions = [aws_sns_topic.alarms_topic.arn]
metric_query {
id = "e1"
expression = "SEARCH('{AWS/Lambda,FunctionName} MetricName=\"Duration\" FunctionName=${local.env_prefix}', 'Maximum', 60000)"
label = "Function Name filter"
return_data = true
}
}
Where env_prefix will be env-1. This works totally fine in the AWS Console when graphing metrics.
Now when I run Terraform, it has an issue with this, saying "Updating metric alarm failed: ValidationError: Period must not be null". However, according to the Terraform documentation, when supplying metric_query you may not specify period...
Is there a concrete way for me to filter my Lambda metrics per environment (by name prefix), instead of alarming on the Lambda functions across the whole account?
This happens because AWS CloudWatch does not support alarms on SEARCH expressions.
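One way around this is to drop SEARCH entirely and create one plain metric alarm per function with for_each. This is a minimal sketch, assuming you can enumerate the function names for the environment in a hypothetical local list called local.env_function_names (not in the original question):

# Sketch of a per-function workaround; local.env_function_names is an assumed
# list of Lambda function names belonging to this environment,
# e.g. ["env-1-function-1", "env-1-function-2"].
resource "aws_cloudwatch_metric_alarm" "lambda_average_duration" {
  for_each = toset(local.env_function_names)

  alarm_name          = "${local.env_prefix}-alarm-${each.key}-average_duration"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = "1"
  threshold           = "40000"
  alarm_description   = "Triggers if the average duration of ${each.key} rises above 40 seconds."
  alarm_actions       = [aws_sns_topic.alarms_topic.arn]

  # A standard single-metric alarm scoped to one function via the
  # FunctionName dimension, so no SEARCH expression is needed.
  namespace   = "AWS/Lambda"
  metric_name = "Duration"
  statistic   = "Average"
  period      = 60

  dimensions = {
    FunctionName = each.key
  }
}

The trade-off is one alarm per function rather than a single aggregated alarm, but each alarm stays scoped to its own environment by construction.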