I'm replacing my application's hand-written logging framework. The existing logger names the file currently being written to "logs.txt", and names the rolled-over files "Logs.N.txt", where "Logs.1.txt" is the most recent rollover after "logs.txt". How do I achieve the same behaviour with Boost.Log v2?

I'm moving to Boost.Log because it provides good support for multiple sinks, as I now have to send my logs to three destinations: a) a local log file, b) Stackdriver in the cloud, and c) a syslog server hosted in a separate container.

The reason I want the current file to be "logs.txt" is that, among other things, it lets one simply run tail -F logs.txt on a live system.

I found a snippet that rotates the logs and enforces the size limits, both per file and for the total log space:

#include <boost/log/utility/setup/file.hpp>
#include <boost/log/sinks/text_file_backend.hpp>

auto strm = boost::log::add_file_log(
        boost::log::keywords::file_name = "Logs.%2N.txt",
        boost::log::keywords::open_mode = std::ios_base::app,
        boost::log::keywords::rotation_size = 5 * 1024, // max size per file
        boost::log::keywords::auto_flush = true
    );

auto bkend = strm->locked_backend();
bkend->set_file_collector(boost::log::sinks::file::make_collector(
            boost::log::keywords::target = "./",          // log file destination
            boost::log::keywords::max_size = 100 * 1024,  // max total size
            boost::log::keywords::min_free_space = 100000 // min free disk space to keep
            ));

bkend->scan_for_files(boost::log::sinks::file::scan_method::scan_matching, true);

Behaviour

The current file generation pattern is:

Logs.01.txt     <--- Oldest file
Logs.02.txt
.
.
.
Logs.19.txt
Logs.20.txt     <--- File being written to

which, as logging continues, becomes

Logs.41.txt     <--- Oldest file
Logs.42.txt
.
.
.
Logs.59.txt
Logs.60.txt     <--- File being written to

The index just keeps incrementing, growing beyond the desired two-digit width:

Logs.131.txt     <--- Oldest file
Logs.132.txt
.
.
.
Logs.149.txt
Logs.150.txt     <--- File being written to

The required file generation pattern is:

logs.txt        <--- File being written to
Logs.01.txt     <--- Latest rolled over file
Logs.02.txt
.
.
.
Logs.12.txt
Logs.13.txt     <--- Oldest file

grows to

logs.txt        <--- File being written to
Logs.01.txt     <--- Latest rolled over file
Logs.02.txt
.
.
.
Logs.19.txt
Logs.20.txt     <--- Oldest file

And since Logs.20.txt is at the limit of the total log space, each rollover should replace Logs.20.txt with the renamed Logs.19.txt, and so on down the chain.

In other words, each rolled-over file keeps getting renamed to the next index until the total log space limit is reached, at which point the oldest file is simply overwritten.

Questions

  1. Is there a configuration for the file logging backend that can support this?
  2. If not, how can I customise the backend for this?
  3. Also, if you are aware of any documentation or tutorials on Boost logging (other than the Boost.Log documentation) that cover the library structure and class-level interactions, please point me to them.

1 Answer

Answer by Andrey Semashev:

Is there a configuration for the file logging backend that can support this?

No, Boost.Log does not support this. The primary reason is that keeping the most recent log file at the counter value of 0 requires N file renames on each rotation, where N is the number of previously rotated files. Aside from the performance implications, this increases the chance of failure during filesystem operations (e.g. if another process opens one of the files during rotation, the rename will fail on Windows).

If not, how can I customise the backend for this?

You don't need to customize the sink backend, but you will have to write a custom file collector. You have to implement the collector interface, most importantly the store_file method, which should perform all filesystem activity, including renaming files and removing old ones. This method is called whenever the sink backend rotates the log file. You can install your file collector by calling set_file_collector on the sink backend.
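The rename cascade such a collector's store_file would have to perform can be sketched as a pure function. This is only an illustration of the renaming logic: rotate_plan, existing_rolled, and max_files are hypothetical names, not part of the Boost.Log collector interface, and a real collector would additionally apply the renames with filesystem calls, enforce the max_size/min_free_space limits, and handle rename failures.

```cpp
#include <algorithm>
#include <cstdio>
#include <string>
#include <utility>
#include <vector>

// Compute the renames needed when "logs.txt" rolls over, ordered from the
// highest index down so that no file is clobbered before it has been moved:
//   Logs.(k).txt -> Logs.(k+1).txt, ..., Logs.01.txt -> Logs.02.txt,
//   and finally logs.txt -> Logs.01.txt.
// Once the highest index would exceed max_files, the oldest file is simply
// overwritten by the first rename instead of being shifted further.
std::vector<std::pair<std::string, std::string>>
rotate_plan(int existing_rolled, int max_files)
{
    std::vector<std::pair<std::string, std::string>> plan;
    int highest = std::min(existing_rolled, max_files - 1);
    for (int i = highest; i >= 1; --i) {
        char from[16], to[16];
        std::snprintf(from, sizeof(from), "Logs.%02d.txt", i);
        std::snprintf(to, sizeof(to), "Logs.%02d.txt", i + 1);
        plan.emplace_back(from, to);
    }
    plan.emplace_back("logs.txt", "Logs.01.txt");
    return plan;
}
```

With 20 rolled-over files and a cap of 20, the first rename in the plan is Logs.19.txt over Logs.20.txt (overwriting the oldest file) and the last is logs.txt to Logs.01.txt; the caller then starts a fresh logs.txt. This also illustrates Andrey's point above: the plan contains one rename per previously rotated file.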