Limiting the S3 PUT file size using pre-signed URLs


I am generating S3 pre-signed URLs so that the client (a mobile app) can PUT an image directly to S3 instead of going through a service. For my use case, the expiry time of the pre-signed URL needs to be configured for a longer window (10-20 minutes). Therefore, I want to limit the size of the file upload to S3 so that a malicious attacker cannot upload large files to the bucket. The client will get the URL from a service which has access to the S3 bucket. I am using the AWS Java SDK.

I found that this can be done using POST forms for browser-based uploads, but how can I do it using just a pre-signed S3 PUT URL?

1 Answer

Answered by ChangNoi:

I was using S3 signed URLs for the first time and was also concerned about this. The whole signed-URL approach is a bit of a pain because you can't put a maximum object/upload size limit on them.

I think that's something very important for file uploads in general that is just missing.

Without that option, you are forced to work around the problem with the expiry time and so on, which gets really messy.

But it seems that you can also upload to S3 buckets with normal POST requests, whose upload policy supports a content-length-range condition. So I'll probably exchange my signed URLs for POST routes in the future.
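To illustrate the POST-policy route: a minimal sketch of the policy document that caps the upload size. The bucket name, key, limit, and expiration here are placeholder assumptions, not values from the post; in a real setup the policy is then signed with your AWS credentials before being handed to the client.

```javascript
// Build a browser-upload (POST) policy document that limits the upload size.
// S3 rejects any POST upload whose body falls outside content-length-range.
function buildPostPolicy(bucket, key, maxBytes, expiresIso) {
  return {
    expiration: expiresIso,
    conditions: [
      { bucket: bucket },
      ["eq", "$key", key],
      // Allow uploads from 0 bytes up to maxBytes, enforced server-side by S3.
      ["content-length-range", 0, maxBytes],
    ],
  };
}

// Example: cap uploads to 1 MB (placeholder bucket/key/expiry).
const policy = buildPostPolicy(
  "my-upload-bucket",
  "uploads/avatar.png",
  1024 * 1024,
  "2030-01-01T00:00:00Z"
);

// The JSON policy is base64-encoded, then signed, and both are sent
// to the client as hidden form fields alongside the file.
const encodedPolicy = Buffer.from(JSON.stringify(policy)).toString("base64");
```

The key point is that the size limit lives in the signed policy itself, so the client cannot bypass it the way it can with a plain pre-signed PUT URL.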

For proper, larger applications, I think this is the way to go.

What might help with your issue:

In the JavaScript SDK there is a method that fetches only the metadata of an S3 object (including its file size) without downloading the whole file.

It's called s3.headObject()

After the upload is done, it can take some time for AWS to process the newly uploaded file before it is available in your bucket.

What I did was set a timer after each upload to check the file size, and if it is bigger than 1 MB, delete the file. For production you probably want to log that somewhere in a database. My file names also include the user ID of whoever uploaded the file; that way, you can block an account after an oversized upload if you want.
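The check-then-delete step described above can be sketched like this. This is an assumption-laden sketch, not the author's exact code: the `s3` client is passed in as a parameter, the 1 MB limit is a placeholder, and it uses the SDK v2 `.promise()` style.

```javascript
// Placeholder limit (1 MB); adjust to your use case.
const MAX_BYTES = 1024 * 1024;

// After an upload, look up the object's size via headObject and delete it
// if it exceeds the limit. Returns what happened so callers can log/ban.
async function enforceSizeLimit(s3, bucket, key) {
  const head = await s3.headObject({ Bucket: bucket, Key: key }).promise();
  if (head.ContentLength > MAX_BYTES) {
    await s3.deleteObject({ Bucket: bucket, Key: key }).promise();
    return { deleted: true, size: head.ContentLength };
  }
  return { deleted: false, size: head.ContentLength };
}
```

Because the client is injected, this can be exercised with a stub in tests; in production you would pass the real `AWS.S3` instance.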

This is what worked for me in JavaScript:

// Fetch only the metadata (including ContentLength) of an S3 object.
// Assumes an AWS.S3 instance `s3` and a BUCKET_NAME constant are in scope.
function checkS3(key) {
  const headParams = { Bucket: BUCKET_NAME, Key: key };
  return new Promise((resolve, reject) => {
    s3.headObject(headParams, (err, metadata) => {
      if (err && ["NotFound", "Forbidden"].includes(err.code)) {
        // The object does not exist (yet), or we lack permission to see it.
        return reject(err);
      } else if (err) {
        // Any other error: wrap it so callers can distinguish the cause.
        return reject(Object.assign(new Error("headObject failed"), { cause: err }));
      }
      return resolve(metadata);
    });
  });
}