My current setup uses a CloudWatch Logs subscription filter to ship logs to OpenSearch, and this works well. However, I have to delete the logs after two weeks because the domain runs out of space.
I now want to move the logs to cold storage instead of deleting them. The problem is that my domain uses a t3.small.search instance, which does not support OpenSearch cold storage:
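For reference, the existing wiring is essentially a subscription filter that forwards everything to the OpenSearch integration (which runs through a Lambda function), roughly like the sketch below. The log group name and destination ARN are placeholders, not my real values:

```python
import boto3

logs = boto3.client("logs")

# Rough sketch of the current setup: every event in the log group is forwarded
# to the Lambda that the CloudWatch -> OpenSearch integration uses, which then
# indexes the events into the OpenSearch domain.
logs.put_subscription_filter(
    logGroupName="/my/app/log-group",          # placeholder
    filterName="opensearch-subscription",
    filterPattern="",                          # empty pattern = all events
    destinationArn="arn:aws:lambda:eu-west-1:123456789012:function:LogsToOpenSearch",  # placeholder
)
```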
"If your domain uses a T2 or T3 instance type for your data nodes, you can't use cold storage." (https://docs.aws.amazon.com/opensearch-service/latest/developerguide/cold-storage.html)
I don't want to upgrade the instance either, because cold storage also requires UltraWarm storage, and that is quite expensive due to the required UltraWarm instance types (https://aws.amazon.com/opensearch-service/pricing/).
I was wondering if anyone has any good ideas on how I can retain my logs for longer without it becoming hugely expensive.
One idea I had was to use Kinesis Data Firehose to deliver the logs from CloudWatch to S3, apply a lifecycle policy to the bucket, and then use S3 as the data source for OpenSearch (via the Lambda integration, https://docs.aws.amazon.com/opensearch-service/latest/developerguide/integrations.html#integrations-s3-lambda) instead of the CloudWatch subscription filter. Has anyone done something similar?
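To make the idea concrete, here is a rough boto3 sketch of what I have in mind. All names, ARNs, prefixes, and day thresholds are placeholders, and I'm assuming the IAM roles (Firehose writing to S3, CloudWatch Logs writing to Firehose) already exist:

```python
import boto3

firehose = boto3.client("firehose")
logs = boto3.client("logs")
s3 = boto3.client("s3")

# 1) Firehose delivery stream that writes the log batches to S3.
#    CloudWatch Logs already gzips the records it sends to Firehose,
#    so no extra compression is configured here.
firehose.create_delivery_stream(
    DeliveryStreamName="cw-logs-to-s3",                                # placeholder
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-to-s3",    # placeholder
        "BucketARN": "arn:aws:s3:::my-log-archive-bucket",             # placeholder
        "Prefix": "cloudwatch-logs/",
        "CompressionFormat": "UNCOMPRESSED",
        "BufferingHints": {"SizeInMBs": 64, "IntervalInSeconds": 300},
    },
)

# 2) Repoint the CloudWatch Logs subscription filter at the Firehose stream
#    instead of the OpenSearch/Lambda destination.
logs.put_subscription_filter(
    logGroupName="/my/app/log-group",                                  # placeholder
    filterName="firehose-subscription",
    filterPattern="",
    destinationArn="arn:aws:firehose:eu-west-1:123456789012:deliverystream/cw-logs-to-s3",
    roleArn="arn:aws:iam::123456789012:role/cwlogs-to-firehose",       # role CloudWatch Logs assumes
)

# 3) Lifecycle policy on the bucket: keep recent objects in Standard,
#    archive older ones, and eventually expire them.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-log-archive-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": "cloudwatch-logs/"},
                "Transitions": [
                    {"Days": 14, "StorageClass": "GLACIER"},           # example threshold
                ],
                "Expiration": {"Days": 365},                           # example retention
            }
        ]
    },
)
```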
Do you have any other suggestions on how I should go about this?