Measuring client back-up when using Boost.Beast WebSocket


I am reading from a Boost.Beast WebSocket. When my application gets backed up, the websocket sender appears happy to delay/buffer the data on their end (presumably at the application level, as they will delay by 1 minute or more).

What is the best way to measure if I am getting backed up? For example, can I look at the size of a TCP buffer? I could also read all the data into memory in a fast thread, and put it in a queue for the slow thread (in which case, backup can be measured by the size of the queue). But I'm wondering if there's a more direct way.
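As a sketch of the queue idea above: a bounded, mutex-guarded queue whose size() is the backlog metric. All names here are illustrative, not from Boost.Beast or any real code base, and the capacity policy (drop on overflow) is just one possible choice.

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>
#include <string>

// Hypothetical bounded queue: a fast reader thread pushes frames,
// a slow consumer pops them, and size() measures the back-up.
template <typename T>
class BacklogQueue {
public:
    explicit BacklogQueue(std::size_t capacity) : capacity_(capacity) {}

    // Returns false (drops the frame) if the consumer is too far behind.
    bool try_push(T item) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (items_.size() >= capacity_) return false;
        items_.push_back(std::move(item));
        cv_.notify_one();
        return true;
    }

    // Blocks until an item is available.
    T pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !items_.empty(); });
        T item = std::move(items_.front());
        items_.pop_front();
        return item;
    }

    // Current backlog: frames the consumer has yet to process.
    std::size_t size() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return items_.size();
    }

private:
    std::size_t capacity_;
    mutable std::mutex mutex_;
    std::deque<T> items_;
    std::condition_variable cv_;
};
```

Polling size() periodically (or logging it when it crosses a threshold) gives a direct, application-level measure of how far behind the consumer is.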


1 answer

Answered by selbie:

This varies by platform, but there's the SO_RCVBUF option, which sets how much data the kernel can queue on the socket before TCP stops accepting more from the sender.

If you have access to the underlying socket, s, you can query how much data its receive buffer can hold:

    net::socket_base::receive_buffer_size opt = {};
    s.get_option(opt);

You'll probably see that it defaults to something like 64K or so.

Then crank it up real high to like a megabyte:

    net::socket_base::receive_buffer_size optSet(1000000);
    boost::system::error_code ec;
    s.set_option(optSet, ec);

YMMV on how large a value you can pass to the set_option call and how much it actually helps.

Keep in mind, this is only a temporary measure to relieve the pressure. If you keep getting backed up, you'll only hit the limit again, just a bit later and perhaps less often.

"I could also read all the data into memory in a fast thread, and put it in a queue for the slow thread"

Yes, but then you've basically reimplemented what SO_RCVBUF already does, just at the application layer. Either that, or you buffer without limit and pay the memory cost.