Logback - compressing log lines before writing to file

Is it possible to gzip the stream of log lines as they arrive at the Logback appender, rather than compressing the file when we log-rotate? If so, how can that be achieved, and is there much benefit to compressing "on the fly" rather than compressing the whole file at once?

Answer (Mark Adler):

Sure. You can simply keep a gzip compression process open and feed it lines as they come in. That would significantly reduce the space required by the log file, and would not take any more CPU resources on average, since you were going to compress it eventually anyway.
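As a sketch of what that could look like in Logback (this is an illustrative appender, not a stock Logback class; the class name and the file property are made up for the example), you can extend OutputStreamAppender and hand it a live GZIPOutputStream:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.OutputStreamAppender;

// Illustrative appender: every encoded log line is fed straight into a
// live gzip stream instead of being written to the file uncompressed.
public class GzipFileAppender extends OutputStreamAppender<ILoggingEvent> {

    private String file;  // target path, e.g. "app.log.gz" (assumed property)

    public void setFile(String file) {
        this.file = file;
    }

    @Override
    public void start() {
        try {
            // syncFlush=true makes each flush() emit a deflate sync block,
            // so recently logged lines become decompressable sooner, at
            // some cost in compression ratio.
            setOutputStream(new GZIPOutputStream(new FileOutputStream(file), true));
        } catch (IOException e) {
            addError("Could not open gzip log file " + file, e);
            return;
        }
        super.start();  // OutputStreamAppender.stop() later closes the
                        // stream, which writes the gzip trailer on a
                        // clean shutdown
    }
}
```

It would be configured in logback.xml like any other appender, with an <encoder> supplying the layout; with immediate flushing enabled, each event passes through the sync-flush path above.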

The first downside is that at any point in time the compressed log file will not yet contain many of the log lines already supplied to it, since the compression process introduces latency and burstiness: many lines must be accumulated before a compressed block is emitted. Second, the compressed file will not be a valid gzip file until it is closed. You would still be able to decompress what's there, but it will not have the trailer with a check value. If the process is killed or the machine crashes, you are left with an invalid gzip file that is missing the most recent several log lines. Of course, the most recent log lines may be exactly the ones you're most interested in, to find out what the heck happened.
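The missing-trailer problem can be partially mitigated. As a sketch (assuming the plain java.util.zip.GZIPOutputStream used above and a hypothetical file name), a shutdown hook can finish() the stream so that at least an orderly JVM exit leaves a valid gzip file:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class GzipTrailerOnExit {
    public static void main(String[] args) throws IOException {
        GZIPOutputStream gz =
                new GZIPOutputStream(new FileOutputStream("app.log.gz"), true);

        // A clean JVM exit triggers finish(), which flushes the remaining
        // compressed data and writes the gzip trailer (CRC-32 + length).
        // A kill -9 or a machine crash still leaves a truncated file.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try {
                gz.finish();
            } catch (IOException ignored) {
            }
        }));

        gz.write("example log line\n".getBytes(StandardCharsets.UTF_8));
        gz.flush();  // with syncFlush=true this makes the line readable now
    }
}
```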

All of those downsides can be cured with a specialized approach for this application, which is implemented in gzlog.h/gzlog.c (found in zlib's examples directory). gzlog ensures that after each line is written, the gzipped log file is complete and valid, and contains that log line. Furthermore, it can reconstruct the gzip file, including the last provided log line, even if the gzlog process itself is interrupted in the middle of adding a log line.