I am reading from a Boost.Beast WebSocket. When my application gets backed up, the websocket sender appears happy to delay/buffer the data on their end (presumably at the application level, as they will delay by 1 minute or more).
What is the best way to measure if I am getting backed up? For example, can I look at the size of a TCP buffer? I could also read all the data into memory in a fast thread, and put it in a queue for the slow thread (in which case, backup can be measured by the size of the queue). But I'm wondering if there's a more direct way.
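For illustration, here is a minimal sketch of that reader-thread-plus-queue idea (not actual code from the question; the websocket read is replaced by a simulated producer, and all names are hypothetical). The queue depth is the backlog measurement:

```cpp
#include <chrono>
#include <condition_variable>
#include <deque>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>

// Thread-safe queue; depth() is the backlog metric.
class MessageQueue {
public:
    void push(std::string msg) {
        std::lock_guard<std::mutex> lock(m_);
        q_.push_back(std::move(msg));
        cv_.notify_one();
    }
    std::string pop() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [&] { return !q_.empty(); });
        std::string msg = std::move(q_.front());
        q_.pop_front();
        return msg;
    }
    std::size_t depth() const {
        std::lock_guard<std::mutex> lock(m_);
        return q_.size();
    }
private:
    mutable std::mutex m_;
    std::condition_variable cv_;
    std::deque<std::string> q_;
};

int main() {
    MessageQueue queue;

    // "Fast" reader: in real code this loop would block in ws.read(buffer)
    // on the Boost.Beast websocket stream and push each message.
    std::thread reader([&] {
        for (int i = 0; i < 100; ++i) {
            queue.push("message " + std::to_string(i));
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        }
    });

    // "Slow" consumer: deliberately slower than the reader, so depth() grows.
    std::thread consumer([&] {
        for (int i = 0; i < 100; ++i) {
            queue.pop();
            std::this_thread::sleep_for(std::chrono::milliseconds(5));
            if (i % 20 == 0)
                std::cout << "backlog: " << queue.depth() << " messages\n";
        }
    });

    reader.join();
    consumer.join();
}
```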
This varies by platform, but there's the SO_RCVBUF option, which sets how much data can be queued on the socket before TCP flow control kicks in and the sender is told to stop sending more.
If you have access to the socket, `s`, you can query this option to inspect how much data its receive buffer can hold. You'll probably see that it defaults to something like 64K or so.
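A sketch, assuming `s` is a `boost::asio::ip::tcp::socket` (for a Beast `websocket::stream` you would typically reach the underlying socket via `ws.next_layer()` or `boost::beast::get_lowest_layer(ws).socket()`, depending on the stream type):

```cpp
#include <boost/asio.hpp>
#include <iostream>

// Sketch: read back the current SO_RCVBUF value for an open TCP socket.
void print_receive_buffer_size(boost::asio::ip::tcp::socket& s) {
    boost::asio::socket_base::receive_buffer_size option;
    s.get_option(option);   // queries SO_RCVBUF
    std::cout << "receive buffer size: " << option.value() << " bytes\n";
}
```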
Then crank it up real high to like a megabyte:
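Another sketch on the same assumed socket `s`; the kernel may clamp (or, on Linux, adjust) the value you request, and system limits like `net.core.rmem_max` can apply, so it's worth reading the option back afterwards:

```cpp
#include <boost/asio.hpp>
#include <iostream>

// Sketch: request a ~1 MiB receive buffer, then read back what we got.
void grow_receive_buffer(boost::asio::ip::tcp::socket& s) {
    s.set_option(boost::asio::socket_base::receive_buffer_size(1024 * 1024));

    boost::asio::socket_base::receive_buffer_size check;
    s.get_option(check);
    std::cout << "receive buffer size is now: " << check.value() << " bytes\n";
}
```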
YMMV on how large a value you can pass to the set_option call and how much it actually helps.
Keep in mind, this is only a temporary measure to relieve the pressure. If you keep getting backed up, you'll only hit the limit again, just a bit later and perhaps less often.
Yes, but then you've basically reimplemented what SO_RCVBUF already does. Either that, or you buffer without bound and pay the memory cost (no limit).