The code below works great for normal uploads, but I am wondering how a resumable upload behaves when, for example, the user loses their connection in the middle of a large upload. Is setting resumable: true in the createWriteStream options the only thing necessary for this to work?
I've read through this link on performing resumable uploads, but it seems that the createWriteStream function should encapsulate this behavior.
I have tried testing this by turning off my Wi-Fi in the middle of an upload, but the time it takes to finish uploading once the connection is restored is the same as for an uninterrupted upload, which is why I'm not sure it is actually working.
Any help or explanation is appreciated; let me know if I can clarify anything.
const stream = remoteFile.createWriteStream({
  gzip: true,
  resumable: true,
  metadata: { contentType: file.mimetype },
});

stream.on('error', (err: any) => {
  // Hand the error to the Express error handler (responding here as well
  // would try to send two responses for the same request)
  next(err);
});

stream.on('finish', () => {
  res.status(200).send('Success!');
});

stream.end(file.buffer);
I didn't trust the resumable option alone to make the upload resumable, so I followed the instructions here. I have attached my code below. This is for a single-request resumable upload, not a multi-chunk one.
First, I send a POST request to the bucket's upload URL to receive the resumable session URI.
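The session-initiation request in my code looks roughly like this (a simplified sketch: the access token, bucket, and object name are placeholders, and I'm using Node 18's built-in fetch here):

// Step 1: ask GCS for a resumable upload session URI.
// accessToken, bucket, and objectName are placeholders for my real values.
async function startResumableSession(
  accessToken: string,
  bucket: string,
  objectName: string,
  contentType: string
): Promise<string> {
  const url =
    'https://storage.googleapis.com/upload/storage/v1/b/' + bucket +
    '/o?uploadType=resumable&name=' + encodeURIComponent(objectName);

  const resp = await fetch(url, {
    method: 'POST',
    headers: {
      Authorization: 'Bearer ' + accessToken,
      'Content-Type': 'application/json; charset=UTF-8',
      // Content type of the object that will be uploaded in the next step
      'X-Upload-Content-Type': contentType,
    },
  });

  // GCS returns the resumable session URI in the Location header
  const sessionUri = resp.headers.get('Location');
  if (!resp.ok || !sessionUri) {
    throw new Error('Failed to start resumable session: ' + resp.status);
  }
  return sessionUri;
}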
Then I send a PUT request to the received session URI with my file data.
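The PUT itself is roughly this (again a sketch, not my exact code). For a single-request resumable upload the whole buffer goes in one request; my understanding is that resuming after a dropped connection would mean querying the upload status and re-sending only the missing bytes, which I am not doing here:

// Step 2: upload the file data to the session URI in a single PUT.
async function uploadToSession(sessionUri: string, data: Buffer): Promise<void> {
  const resp = await fetch(sessionUri, {
    method: 'PUT',
    // fetch sets Content-Length automatically for a Buffer body
    body: data,
  });

  if (!resp.ok) {
    throw new Error('Upload failed with status ' + resp.status);
  }
}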