Do separate S3 TransferUtility instances share a common maxConcurrentUploads limit?


I have a high-latency site uploading 800 MB sets of files to S3 using TransferUtility (.NET) through a proxy, as one component of a service provided by a Supplier. Upload time per set is only marginally shorter than the cycle time, so backlogs become an issue whenever we encounter significant retransmissions on the Internet link (it's not uncommon for one or two streams in each 30-stream set to be disrupted).

The default config has maxConcurrentUploads set to 30, and we see that achieve ~120 Mbps. We also tested with maxConcurrentUploads set to 100 and to 10; neither changed the throughput significantly, and we noted that only 50 concurrent streams were established when maxConcurrentUploads was set to 100. We do not understand what is limiting the concurrent uploads to 50.
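
For reference, the closest documented knob we can find in the AWS SDK for .NET is TransferUtilityConfig.ConcurrentServiceRequests; mapping the Supplier's maxConcurrentUploads onto it is our assumption, since that name doesn't appear in the public SDK:

```
using Amazon.S3;
using Amazon.S3.Transfer;

// Assumption: the Supplier's "maxConcurrentUploads" corresponds to
// ConcurrentServiceRequests, which caps how many service requests
// (e.g. multipart part uploads) a single TransferUtility runs in parallel.
var s3Client = new AmazonS3Client();
var config = new TransferUtilityConfig
{
    ConcurrentServiceRequests = 30 // the value in the Supplier's default config
};
var transferUtility = new TransferUtility(s3Client, config);
```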

The Supplier's code appears to block at the job/set level. Instead of maintaining concurrent uploads at the configured value, we observe maxConcurrentUploads connections being established to the proxy all at once (within 1 ms of each other) as each set triggers, with only streams that take more than 3x the standard duration occasionally overlapping the next set. This, along with the network-layer retransmissions, is a primary cause of the backlog building, but we have no access to the code.
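
We can't see the code, but the connection timing is consistent with a structure something like this purely hypothetical sketch (FileItem, fileSets, and bucketName are illustrative names, not the Supplier's):

```
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Amazon.S3.Transfer;

// Hypothetical sketch only; we have no access to the Supplier's code.
async Task UploadAllSetsAsync(TransferUtility transferUtility,
    IEnumerable<IReadOnlyList<FileItem>> fileSets, string bucketName)
{
    foreach (var fileSet in fileSets)
    {
        // All streams in the set open at once, matching the ~1 ms spacing
        // we see at the proxy...
        var uploads = fileSet
            .Select(f => transferUtility.UploadAsync(f.LocalPath, bucketName, f.Key))
            .ToList();

        // ...and the next set cannot start until the slowest stream in this
        // set completes, so a few retransmitting streams stall the pipeline.
        await Task.WhenAll(uploads);
    }
}

record FileItem(string LocalPath, string Key);
```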

Circuit utilisation is relatively low, and it was suggested that parallel S3 uploads would address/mask the latency, packet loss, and coding constraints. However, the Supplier believes that running multiple instances of TransferUtility would be constrained by a common maxConcurrentUploads limit.
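
What we would like to try is something along these lines, with each instance carrying its own config; whether the two limits are genuinely independent is exactly what we need confirmed:

```
using Amazon.S3;
using Amazon.S3.Transfer;

// Sketch of the proposed workaround. Our reading of the SDK is that the
// concurrency setting lives on the per-instance config, with no
// process-wide counter, but that is the assumption in question.
var utilityA = new TransferUtility(new AmazonS3Client(),
    new TransferUtilityConfig { ConcurrentServiceRequests = 30 });
var utilityB = new TransferUtility(new AmazonS3Client(),
    new TransferUtilityConfig { ConcurrentServiceRequests = 30 });

// If the limits are independent we should see up to 60 streams at the
// proxy; if something shared caps them, we would expect to stall at ~50.
```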

Has anyone encountered a limit of 50 for maxConcurrentUploads? Is it in TransferUtility, the Windows TCP stack or somewhere else?
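
One layer we plan to rule out ourselves is the .NET per-endpoint connection limit; a minimal diagnostic, assuming the service runs on .NET Framework where ServicePointManager applies:

```
using System;
using System.Net;

// On .NET Framework, ServicePointManager caps concurrent HTTP connections
// per endpoint, so we want to see whether raising it moves the 50-stream
// ceiling. It must be set before the first connection is opened.
Console.WriteLine($"DefaultConnectionLimit: {ServicePointManager.DefaultConnectionLimit}");
ServicePointManager.DefaultConnectionLimit = 200;
```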

Can anyone confirm whether separate instances of TransferUtility would have independent maxConcurrentUploads counters?
