Chronicle Queue StoreTailer.next() creating a huge amount of garbage


I'm benchmarking Chronicle Queue for one of our use cases and noticed that the readDocument() API of ExcerptTailer creates a large amount of garbage. JFR shows that around 66% of the allocation pressure comes from the stack below.

What version of Chronicle Queue am I using?

net.openhft:chronicle-queue:4.5.9

How am I creating the queue?

ChronicleQueue queue = SingleChronicleQueueBuilder.binary(filename).build();
ExcerptAppender appender = queue.acquireAppender();
ExcerptTailer tailer = queue.createTailer();

// Snippet to read
tailer.readDocument(r -> {
    // Reading some content here
});
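
For context, the tailer is driven by a busy-polling loop along these lines (a simplified sketch; the running flag and the handler body are placeholders, not our exact code):

while (running) {
    boolean read = tailer.readDocument(r -> {
        // Reading some content here
    });
    // When read == false (empty queue), the tailer re-resolves the
    // queue's first cycle, which lists the queue directory on every
    // call -- see the stack trace below.
}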

How much garbage is created?

Around 11 GB in 3 minutes

Stacktrace

Stack Trace    TLABs    Total TLAB Size (bytes)    Pressure (%)
byte[] java.lang.StringCoding$StringEncoder.encode(char[], int, int)    167 6,593,171,600 52.656
   byte[] java.lang.StringCoding.encode(String, char[], int, int)   167 6,593,171,600   52.656
      byte[] java.lang.String.getBytes(String)  167 6,593,171,600   52.656
         String[] java.io.UnixFileSystem.list(File) 167 6,593,171,600   52.656
            String[] java.io.File.list()    167 6,593,171,600   52.656
               String[] net.openhft.chronicle.queue.impl.single.SingleChronicleQueue.getList()  167 6,593,171,600   52.656
                  void net.openhft.chronicle.queue.impl.single.SingleChronicleQueue.setFirstAndLastCycle()  167 6,593,171,600   52.656
                     int net.openhft.chronicle.queue.impl.single.SingleChronicleQueue.firstCycle()  167 6,593,171,600   52.656
                        long net.openhft.chronicle.queue.impl.single.SingleChronicleQueue.firstIndex()  167 6,593,171,600   52.656
                           boolean net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.next(boolean)   167 6,593,171,600   52.656

What else did I try?

I used JitWatch and increased the bytecode size limit for escape analysis from 150 bytes to 516 bytes, and I confirmed that the readDocument() method is JIT-compiled.
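
For reference, on HotSpot the limit I changed is, I believe, the MaxBCEAEstimateSize flag (default 150 bytes); my-benchmark.jar below is a placeholder for the actual benchmark:

java -XX:MaxBCEAEstimateSize=516 -jar my-benchmark.jar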

Any suggestions on the next step?


1 Answer

Answered by Peter Lawrey (accepted answer)

This only happens when busy polling and there are no messages. You could add a dummy message to prevent this. A workaround was added in later versions.
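
A minimal sketch of the dummy-message idea, assuming readers recognise and skip the placeholder (the "dummy" event name is illustrative):

// Seed the queue once at start-up so a busy-polling tailer never
// sees an empty queue and never falls into the directory re-scan.
appender.writeDocument(w -> w.write(() -> "dummy").text("ignore"));

Consumers then skip any document whose event name is "dummy".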