Akka.Net Sending huge messages (maximum-frame-size)


I have a question regarding increasing maximum-frame-size & send/receive-buffer-size values. Is there a limit on how high they can go?

I am passing a large chunk of data into the system (say 20 MB), which is then used to compute some results that are sent back. Setting the above parameters to 100 MB results in messages being dropped. The largest chunk I could pass before this happens is about 5 MB. I have tried increasing the connection and ack timeouts, but it doesn't seem to make a difference.

Also, if a message is being dropped, is there any way to get notified about it? Sometimes it produces a Disassociated error and sometimes it just sits there doing nothing. The log-frame-size-exceeding = on and log-buffer-size-exceeding = 50000 settings do not seem to have an effect.
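
For reference, the settings in question are remoting HOCON along these lines (the transport block is named helios.tcp or dot-netty.tcp depending on the Akka.NET version, and the exact accepted values differ between releases, so the snippet below is only a placeholder sketch, not my actual config):

```hocon
akka {
  remote {
    # diagnostic settings mentioned above; check your version's
    # reference.conf for the exact accepted values
    log-frame-size-exceeding = on
    log-buffer-size-exceeding = 50000

    helios.tcp {               # "dot-netty.tcp" on newer Akka.NET releases
      # messages larger than this are dropped by the transport
      maximum-frame-size = 30000000b

      # socket buffers should be at least as large as the frame size
      send-buffer-size = 30000000b
      receive-buffer-size = 30000000b
    }
  }
}
```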

Any help is appreciated. Thank you.

There are 2 answers

Answer by Bartosz Sypytkowski

In general it's a bad idea to push big portions of data over the wire at once. It's a lot better to split them into smaller parts and send them one by one (this also makes a retry policy less expensive, if one is necessary). If you want to keep your actor logic unaware of transport details, you can abstract this away by defining a specialized pair of actors whose only job is to split/join big messages.
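
A rough sketch of what the splitting side of such a pair could look like is below; every name in it (ChunkEnvelope, PayloadChunker, the chunk size) is made up purely for illustration:

```csharp
using System;
using Akka.Actor;

// Hypothetical envelope carrying one piece of a larger payload.
public sealed class ChunkEnvelope
{
    public ChunkEnvelope(Guid transferId, int index, int total, byte[] data)
    {
        TransferId = transferId;
        Index = index;
        Total = total;
        Data = data;
    }

    public Guid TransferId { get; }  // identifies which transfer a chunk belongs to
    public int Index { get; }        // position of this chunk within the transfer
    public int Total { get; }        // how many chunks make up the whole payload
    public byte[] Data { get; }
}

// Hypothetical "splitter": breaks a big byte[] into small chunks and sends
// them one by one to the remote target, so no single remote message comes
// anywhere near maximum-frame-size.
public sealed class PayloadChunker : ReceiveActor
{
    private const int ChunkSize = 64 * 1024; // well below the frame limit

    public PayloadChunker(IActorRef remoteTarget)
    {
        Receive<byte[]>(payload =>
        {
            var transferId = Guid.NewGuid();
            var total = (payload.Length + ChunkSize - 1) / ChunkSize;

            for (var i = 0; i < total; i++)
            {
                var length = Math.Min(ChunkSize, payload.Length - i * ChunkSize);
                var slice = new byte[length];
                Array.Copy(payload, i * ChunkSize, slice, 0, length);

                remoteTarget.Tell(new ChunkEnvelope(transferId, i, total, slice));
            }
        });
    }
}
```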

Also, as Aaron, the creator of Helios (the socket server library used by Akka.NET), mentioned, you should not send overly large messages, since they stretch the server's buffer pool; and once that pool has been stretched, it won't shrink back down again.

Answer by AndrewS

You should cut up your messages into much smaller pieces and reconstitute the object on the receiving end. It will make your retries much easier, and it also won't "hog" the socket (e.g. if you're sending 100 MB through a socket, you're tying it up so that heartbeats from remote systems can't get through).

I wrote an in-depth post about what goes on with large messages and sockets in Akka.NET that you may find useful. But the short answer is: cut up your messages into small pieces and rebuild them on the receiving end, or better yet, process them in a streaming fashion.
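
For completeness, here is a sketch of what the receiving side that rebuilds the payload could look like; it reuses the hypothetical ChunkEnvelope from the other answer, and like that sketch, every name here is illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Akka.Actor;

// Hypothetical "joiner": collects ChunkEnvelope messages per transfer id and,
// once every chunk has arrived, hands the reassembled byte[] to the real worker.
public sealed class PayloadReassembler : ReceiveActor
{
    private readonly IActorRef _worker;
    private readonly Dictionary<Guid, byte[][]> _inFlight =
        new Dictionary<Guid, byte[][]>();

    public PayloadReassembler(IActorRef worker)
    {
        _worker = worker;

        Receive<ChunkEnvelope>(chunk =>
        {
            if (!_inFlight.TryGetValue(chunk.TransferId, out var parts))
            {
                parts = new byte[chunk.Total][];
                _inFlight[chunk.TransferId] = parts;
            }

            parts[chunk.Index] = chunk.Data;

            // Rebuild the original payload once no slots are missing.
            if (parts.All(p => p != null))
            {
                _inFlight.Remove(chunk.TransferId);
                _worker.Tell(parts.SelectMany(p => p).ToArray());
            }
        });
    }
}
```

In a real system you would also want a timeout so half-finished transfers don't linger forever, plus some per-chunk retry handling, but that's the general shape.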