Objects uploaded to a bucket have a different storage class than the bucket's default storage class.
I have a Synology NAS and am using the Hyper Backup application to create an offsite backup of my NAS data. One of the options is to store the data in an S3 or S3-compatible bucket. I am using Google Cloud's Archive storage class: I created a bucket with a default class of Archive and set up the backup task. The backup has started and will take some time to finish.
However, when I look at any object inside the bucket, its storage class is shown as Standard. Below is an example screenshot. I am not clear why the objects are being uploaded as Standard. A Standard object costs more than 16 times as much to store as an Archive object, so the whole exercise is losing its value pretty fast.
Bucket class as shown in Google Cloud console
Any idea why this is happening? And how can I configure things so that objects are uploaded in the Archive storage class?
When you upload an object to a bucket, it normally takes on the bucket's default storage class. However, the uploading service or application can specify a different storage class in the upload request; in that case, the object gets the class named at upload time, not the bucket's default.
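For illustration, here is a minimal sketch of that mechanism using the google-cloud-storage Python client (the bucket and object names are placeholders; this is not what Hyper Backup does internally, since it talks to the S3-compatible endpoint, but the effect is the same): an upload request that names a storage class overrides the bucket default, and you can check afterwards which class the object actually received.

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-backup-bucket")   # placeholder bucket name

# An uploader can set a storage class per object; this overrides the
# bucket's default class for that object.
blob = bucket.blob("example/object.bin")
blob.storage_class = "STANDARD"              # class requested by the uploader
blob.upload_from_filename("object.bin")      # placeholder local file

# Inspecting the object afterwards shows the class it was created with.
existing = bucket.get_blob("example/object.bin")
print(existing.storage_class)                # "STANDARD", not the bucket's "ARCHIVE"
```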
To ensure that your objects are uploaded to the Archive storage class, you can do one of two things:
Configure the upload application. In your case, the Hyper Backup application on the Synology NAS is uploading the data. You may need to configure it to specify the Archive storage class during the upload if it supports this functionality.
Use Object Lifecycle Management rules. You can create a lifecycle rule on the bucket that changes the storage class of objects to Archive after they have been uploaded. For example, a rule can change the storage class of any new object to Archive one day after creation; a programmatic sketch of such a rule is shown after this list.
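If you prefer to set the rule programmatically rather than through the console, a minimal sketch using the google-cloud-storage Python client could look like this (the bucket name is a placeholder; the one-day age threshold matches the example above):

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-backup-bucket")   # placeholder bucket name

# Rule: move objects to ARCHIVE once they are at least 1 day old.
bucket.add_lifecycle_set_storage_class_rule("ARCHIVE", age=1)
bucket.patch()                                   # apply the updated lifecycle config

# Verify the rules currently attached to the bucket.
for rule in bucket.lifecycle_rules:
    print(rule)
```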
Here's how you can set up a lifecycle rule: