I am trying to understand how the size of persisted map files is calculated.
When creating a persisted map on disk via something like:
ChronicleMap
    .of(Key.class, Value.class)
    .name("foo")
    .entries(1024)
    .averageKeySize(32)
    .averageValueSize(2048)
    .maxBloatFactor(1)
    .createOrRecoverPersistedTo(new File("foo.dat"))
I imagine the size of the pre-allocated "foo.dat" file is a function of the average key/value sizes, the number of entries, and the maxBloatFactor, and perhaps also of the OS architecture and other factors.
So my question is: Given a set of configuration values, is it possible to know deterministically how much the "foo.dat" file size will end up being?
You can simply call the
VanillaChronicleMap#dataStoreSize()
method; it returns the file size. For details on how it works, have a look at its implementation. It's open source (albeit not a trivial computation).
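As a rough sketch of how you might use it: the map returned by the builder is (in the versions I've seen) a `VanillaChronicleMap` instance, so you can cast and call `dataStoreSize()`, then compare it with the file's length on disk. The cast, the generic parameters of `VanillaChronicleMap`, and the use of `CharSequence` keys/values here are assumptions for illustration and may differ between Chronicle Map versions.

```java
import java.io.File;
import java.io.IOException;

import net.openhft.chronicle.map.ChronicleMap;
import net.openhft.chronicle.map.VanillaChronicleMap;

public class DataStoreSizeDemo {
    public static void main(String[] args) throws IOException {
        File file = new File("foo.dat");
        // Same builder configuration as in the question, using CharSequence
        // for simplicity (assumption: your real Key/Value types go here).
        try (ChronicleMap<CharSequence, CharSequence> map = ChronicleMap
                .of(CharSequence.class, CharSequence.class)
                .name("foo")
                .entries(1024)
                .averageKeySize(32)
                .averageValueSize(2048)
                .maxBloatFactor(1)
                .createOrRecoverPersistedTo(file)) {
            // Assumption: the concrete implementation is VanillaChronicleMap,
            // which exposes dataStoreSize().
            long dataStoreSize = ((VanillaChronicleMap<?, ?, ?>) map).dataStoreSize();
            System.out.println("dataStoreSize(): " + dataStoreSize);
            System.out.println("file length:     " + file.length());
        }
    }
}
```

Note that this is deterministic only in the sense that the same configuration on the same platform yields the same size; the computation itself depends on internal layout details (segment headers, alignment, page size), which is why inspecting `dataStoreSize()` is more reliable than reproducing the formula by hand.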