There are multiple incoming TCP connections, and you need to architect a system that writes to an outgoing TCP connection at a particular data rate. How do you programmatically achieve this? The ordering of packets arriving from different threads is unimportant; you can make assumptions about that.
Is this just a rate-limiting algorithm? Or are there TCP-related protocol characteristics that can be exploited?
Rejected answer: using a queue and a timer to implement the leaky-bucket approach. (This was an interview question.)
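For reference, here is a minimal sketch of the queue-plus-timer leaky-bucket idea mentioned above, assuming the rate is measured in bytes per second and that interleaving of chunks from different connections is acceptable. The port numbers, destination host, and constants are hypothetical, and the pacing ignores the time spent in `sendall` itself, so it is only an approximation of the target rate.

```python
import socket
import threading
import time
import queue

# Hypothetical parameters, for illustration only.
RATE_BYTES_PER_SEC = 64 * 1024   # target outgoing data rate
CHUNK_SIZE = 4096                # read size per incoming connection

buf = queue.Queue()              # shared queue; ordering across producers is unspecified


def reader(conn):
    """Producer: drain one incoming TCP connection into the shared queue."""
    with conn:
        while True:
            data = conn.recv(CHUNK_SIZE)
            if not data:
                break
            buf.put(data)


def rate_limited_writer(out_sock):
    """Consumer (leaky bucket): drain the queue onto the outgoing socket,
    sleeping after each send so the long-run average rate stays at or
    below RATE_BYTES_PER_SEC."""
    while True:
        data = buf.get()
        out_sock.sendall(data)
        # Sleep for the time this chunk "costs" at the target rate.
        time.sleep(len(data) / RATE_BYTES_PER_SEC)


def main():
    # Hypothetical destination for the rate-limited outgoing connection.
    out_sock = socket.create_connection(("dest.example", 9000))
    threading.Thread(target=rate_limited_writer, args=(out_sock,), daemon=True).start()

    # Accept incoming connections and spawn one reader thread per connection.
    srv = socket.create_server(("0.0.0.0", 8000))
    while True:
        conn, _addr = srv.accept()
        threading.Thread(target=reader, args=(conn,), daemon=True).start()


if __name__ == "__main__":
    main()
```

This is just the rejected queue-and-timer answer made concrete; the open question remains whether the interviewer expected something at the TCP level instead, such as shaping via the socket send buffer or receive-window sizing rather than application-level pacing.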