I wrote a download library for my colleagues. It writes downloaded data to files. My colleagues found that a file stays small for a long time, even after 100 MB of data have been downloaded, so they suggested that I call flush() after every write() so the data doesn't pile up in memory buffers.
But I don't think 100 MB of virtual memory is a lot, and maybe Windows has its reasons for buffering that much data.
What do you think about it?
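To make the suggestion concrete, here is a minimal sketch of both patterns, assuming Python's standard file API (the URL, chunk size, and function name are made up for illustration):

```python
import urllib.request

CHUNK_SIZE = 64 * 1024  # read 64 KB at a time

def download(url: str, path: str, flush_every_write: bool = False) -> None:
    """Stream url to path, optionally flushing after every write()."""
    with urllib.request.urlopen(url) as response, open(path, "wb") as out:
        while True:
            chunk = response.read(CHUNK_SIZE)
            if not chunk:
                break
            out.write(chunk)       # data lands in the process's buffer first
            if flush_every_write:
                out.flush()        # what my colleagues want: hand it to the OS now
```

Note that flush() only pushes data from the process's own buffer to the operating system; forcing it onto the physical disk would additionally require os.fsync(out.fileno()), which is even more expensive per write.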
I would trust the operating system to tune itself appropriately, personally.
As for "flush immediately so as not to lose data if power dies" - if the power dies half way through a file, would you trust that the data you'd written was okay and resume the download from there? If so, maybe it's worth flushing early - but I'd weigh the complexity of resuming against the relative rarity of power failures, and just close the file when I'd read everything. If you see a half written file, delete it and download it again from scratch.