Best way to move files of varying sizes across slow network using .NET

I'm building a .NET remoting client/server that will be transmitting thousands of files, of varying sizes (everything from a few bytes to hundreds of MB), and I'd like some feedback on the best method for achieving this. As I see it, there are a couple of options:

  • Serialize the entire file into my remoting object and transmit it all at once, regardless of size. This would probably be the fastest, but a failure during transmission requires that the whole file be re-transmitted, with no way to resume.
  • If the file is larger than something small (say 4 KB), break it into 4 KB chunks, remote each chunk, and re-assemble them on the server (see the sketch after this list). Besides the added complexity, this is slower because of the repeated round-trips and acknowledgements, though a failure of any one piece doesn't waste much time.
  • Include something like an FTP or SFTP server with my application - the client notifies the server via remoting that it's starting, uploads the file, then uses remoting to signal completion. I'd like to contain everything in my app instead of requiring a separate FTP service, but I'm open to this option if it's needed.
  • Use some kind of stateful TCP connection, or WCF, or some other transmission method that's built to handle failures or is capable of some kind of checkpoint/resume.
  • Any others I'm missing?
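
To make the chunked option concrete, here is a rough sketch of what the remoted contract and the client-side loop could look like. The IFileReceiver interface, its method names, and the 64 KB chunk size are my own assumptions for illustration, not an existing API:

    using System;
    using System.IO;

    // Hypothetical remoting contract for chunked uploads; names are illustrative only.
    public interface IFileReceiver
    {
        // Returns how many bytes the server already has, so the client can resume.
        long BeginUpload(string fileName, long totalLength);
        void WriteChunk(string fileName, long offset, byte[] data);
        void EndUpload(string fileName);
    }

    public static class ChunkedSender
    {
        private const int ChunkSize = 64 * 1024; // larger than 4 KB to cut down on round-trips

        public static void Send(IFileReceiver receiver, string path)
        {
            string name = Path.GetFileName(path);
            using (FileStream fs = File.OpenRead(path))
            {
                // Ask the server where to resume from (0 for a fresh upload).
                long offset = receiver.BeginUpload(name, fs.Length);
                fs.Seek(offset, SeekOrigin.Begin);

                var buffer = new byte[ChunkSize];
                int read;
                while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
                {
                    // Copy only the bytes actually read before remoting the chunk.
                    var chunk = new byte[read];
                    Array.Copy(buffer, chunk, read);
                    receiver.WriteChunk(name, offset, chunk);
                    offset += read;
                }
                receiver.EndUpload(name);
            }
        }
    }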

What's the most flexible/reliable transmission method? I'm not that concerned about speed, but more about reliability - I want the file to move, even if it's slowly. Since the client and server will be multi-threaded, I can transmit multiple files at the same time if the connection allows it.

Thanks for your feedback - I'll throw in a bounty to get some recommendations on ways people would accomplish this.

There are 3 answers

TFD On BEST ANSWER

Jakob Borg On

This is what TCP itself is made for, and has been tuned for over decades of hard testing. Remoting is made for small RPC calls, not large file transfers. You should simply use a TCP socket to transmit the data and let the lower-layer protocols worry about latency, transmission windows, MTU, and so on.
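
As a rough illustration (the host name, port, and "8-byte length prefix then raw bytes" framing here are arbitrary choices, not requirements), the sending side over a plain socket can be very small:

    using System;
    using System.IO;
    using System.Net.Sockets;

    // Streams one file over a plain TCP connection.
    class TcpFileSender
    {
        static void Main(string[] args)
        {
            string path = args[0];
            using (var client = new TcpClient("server.example.com", 9000))
            using (NetworkStream net = client.GetStream())
            using (FileStream file = File.OpenRead(path))
            {
                // Tell the receiver how many bytes to expect.
                byte[] lengthPrefix = BitConverter.GetBytes(file.Length);
                net.Write(lengthPrefix, 0, lengthPrefix.Length);

                // Let TCP handle windowing, retransmission, and congestion control.
                file.CopyTo(net);
            }
        }
    }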

regex On

Although calmh does answer the question from the OSI layer-4 side of things, I feel you're really asking about the application tier. TCP definitely handles latency, transmission windows, and so on at the network level. However, it does not by itself determine what happens if a user ends a download session prematurely and then decides to pick it up later where they left off.

To answer your question from a different angle, I would definitely recommend chunking the file into sections and indexing them, regardless of the connection speed. The chunks can then be re-assembled on the client once the entire file has been downloaded. This lets the user pause a download session and resume it later.
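
One way to get that pause/resume behaviour is to record the chunks that have already arrived in a small sidecar index and skip them on the next attempt. A minimal sketch, where the index-file format and names are my own invention:

    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    // Tracks completed chunk numbers in a sidecar file so a transfer can be
    // resumed after the connection (or the process) dies.
    class ChunkIndex
    {
        private readonly string _indexPath;
        private readonly HashSet<int> _done;

        public ChunkIndex(string indexPath)
        {
            _indexPath = indexPath;
            _done = File.Exists(indexPath)
                ? new HashSet<int>(File.ReadAllLines(indexPath).Select(int.Parse))
                : new HashSet<int>();
        }

        public bool IsDone(int chunkNumber)
        {
            return _done.Contains(chunkNumber);
        }

        public void MarkDone(int chunkNumber)
        {
            if (_done.Add(chunkNumber))
            {
                File.AppendAllLines(_indexPath, new[] { chunkNumber.ToString() });
            }
        }
    }

The download loop then only requests chunks for which IsDone returns false.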

As far as determining the speed goes, there may be pre-built ways to do this, but one method is to build your own speed test: send 1 MB to the client (upload) and have it send a response once it has received it. 1024 divided by the number of seconds it took to get the response back is roughly the rate, in KB/s, at which the client can download from the server. And vice versa to test upload from the client.
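
A crude version of that test, assuming a sendProbe delegate that transmits the payload and returns only once the client has acknowledged receipt (the delegate itself is a placeholder for whatever call your transport exposes):

    using System;
    using System.Diagnostics;

    static class SpeedTest
    {
        // Times a 1 MB round trip and reports approximate download speed in KB/s.
        public static double MeasureKilobytesPerSecond(Action<byte[]> sendProbe)
        {
            var payload = new byte[1024 * 1024]; // 1 MB of test data

            var timer = Stopwatch.StartNew();
            sendProbe(payload); // blocks until the client acknowledges receipt
            timer.Stop();

            return 1024.0 / timer.Elapsed.TotalSeconds; // 1024 KB / elapsed seconds
        }
    }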

As far as transmitting the data goes, I would recommend using existing technologies. SFTP supports authenticated, encrypted data transfer; it is essentially a file-transfer protocol that runs over SSH. There should be APIs available for working with it.
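
For example, a library such as SSH.NET (Renci.SshNet) exposes SFTP uploads fairly directly; in the sketch below the server name, credentials, and paths are placeholders:

    using System.IO;
    using Renci.SshNet; // SSH.NET library

    class SftpUpload
    {
        static void Main()
        {
            using (var sftp = new SftpClient("server.example.com", "user", "password"))
            {
                sftp.Connect();
                using (FileStream file = File.OpenRead(@"C:\data\payload.bin"))
                {
                    // UploadFile streams the file; SSH handles encryption and integrity.
                    sftp.UploadFile(file, "/incoming/payload.bin");
                }
                sftp.Disconnect();
            }
        }
    }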

On a side note, I have never built anything at the scale you're describing, but hopefully these ideas at least give you a couple of options to consider.