I am syncing a directory to AWS S3 from a Linux server for backup.
rsync -a --exclude 'cache' /path/live /path/backup
aws s3 sync /path/backup s3://myBucket/backup --delete
However, I noticed that when I restore a backup like so:
aws s3 sync s3://myBucket/backup /path/live/ --delete
the owner and file permissions are different. Is there anything I can do, or change in the commands, to retain the files' original Linux ownership and permissions?
Thanks!
I stumbled on this question while looking for something else and figured you (or someone else) might like to know that there are other tools that can preserve the original (Linux) ownership information. There must be others, but I know that s3cmd can keep the ownership information (stored in the metadata of the object in the bucket) and restore it when you sync back to a Linux box.
The syntax for syncing is as follows
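A minimal sketch, assuming s3cmd is already installed and configured (via s3cmd --configure) and reusing the paths from the question:

```shell
# --preserve stores filesystem attributes (mode, ownership, timestamps)
# in the object metadata; it is the default for the sync command.
# --delete-removed mirrors the --delete behaviour of aws s3 sync.
s3cmd sync --preserve --delete-removed /path/backup/ s3://myBucket/backup/
```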
And you can sync it back with the same command just reversing the from/to.
But, as you might know (if you have looked into S3 cost optimisation), depending on the situation it can be wiser to upload a single compressed file. It saves space and takes fewer requests, so you could end up with some savings at the end of the month.
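For instance, a sketch of that approach (paths as in the question; the archive name is illustrative). A side benefit here is that tar records ownership and permissions inside the archive itself, so the S3 metadata no longer matters:

```shell
# Create one compressed archive of the backup tree.
tar -czf /path/backup.tar.gz -C /path backup
# Upload a single object: one PUT request instead of one per file.
aws s3 cp /path/backup.tar.gz s3://myBucket/backup.tar.gz

# To restore, download and unpack as root so ownership is restored:
#   aws s3 cp s3://myBucket/backup.tar.gz /path/backup.tar.gz
#   tar -xzpf /path/backup.tar.gz -C /path
```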
Also, s3cmd is not the fastest tool for synchronising with S3, as it does not use multi-threading (and there are no plans to add it), unlike some other tools; so you might want to look for a tool that preserves ownership and also benefits from multi-threading, if that's still what you're after. To speed up data transfer with s3cmd, you can run multiple s3cmd processes with different --exclude/--include statements.
For example:
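Something along these lines (the split by first letter is purely illustrative; s3cmd applies --exclude/--include filters in order, so you exclude everything and re-include a disjoint subset in each process):

```shell
# Two s3cmd processes transfer disjoint parts of the tree in parallel.
s3cmd sync --preserve --exclude '*' --include '[a-m]*' /path/backup/ s3://myBucket/backup/ &
s3cmd sync --preserve --exclude '*' --include '[n-z]*' /path/backup/ s3://myBucket/backup/ &
wait  # block until both background transfers finish
```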