Can chronicle-map handle data larger than memory?


I'm a bit confused by how off-heap memory works. I have a server with 32 GB of RAM and a data set of key-value mappings about 1 TB in size. I'm looking for a simple and fast embedded Java database that would let me map a key to a value according to this 1 TB data set, which will mostly have to be read from disk. Each entry in the data set is small (<500 bytes), so I think using one file per entry on the file system would be inefficient.

I'd like to use Chronicle Map for this. I've read that off-heap memory usage can exceed RAM size and that it interacts with the file system somehow, but at the same time Chronicle Map is described as an in-memory database. Can Chronicle Map handle the 1 TB data set on my server, or am I limited to data sets of 32 GB or less?


1 Answer

Answered by Peter Lawrey (accepted answer)

The answer is: it depends on your operating system. On Windows, a Chronicle Map must fit inside main memory, but on Linux and macOS it doesn't have to fit in main memory (the difference is in how memory mapping is implemented on these OSes). Note: Linux even allows you to map a region larger than your disk space (macOS and Windows don't).
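The mechanism behind this is ordinary file-backed memory mapping: the OS pages the mapped region in and out on demand, so the file as a whole never has to be resident in RAM. A minimal sketch of the idea using only `java.nio` (the file name and sizes here are illustrative):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class SparseMapDemo {
    // Creates a 1 GiB sparse file, maps a small window half-way into it,
    // writes a value through the mapping, and reads it back.
    static long writeAndReadBack(File file) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file, "rw");
             FileChannel ch = raf.getChannel()) {
            // Sparse file: setLength reserves the logical size without
            // allocating physical disk blocks for untouched pages.
            raf.setLength(1L << 30);
            // The OS pages this region in and out on demand, so the
            // mapped data does not have to fit in physical RAM.
            MappedByteBuffer buf =
                    ch.map(FileChannel.MapMode.READ_WRITE, 512L << 20, 4096);
            buf.putLong(0, 42L);
            return buf.getLong(0);
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("sparse", ".dat");
        f.deleteOnExit();
        System.out.println(writeAndReadBack(f)); // prints 42
    }
}
```

Chronicle Map builds on the same mechanism, which is why the OS-level differences in memory-mapping behaviour determine the size limit.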

So on Linux you could map 1 TB, or even 100 TB, on a machine with 32 GB of memory. It is important to remember that your access pattern and your choice of drive will be critical to performance: if you mostly access the same subset of the data and you have an SSD, this will perform well; if you have a spinning disk and a random access pattern, you will be limited by the speed of the drive.
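To get a file-backed map rather than a purely in-memory one, you create it with `createPersistedTo`, which backs the map with a memory-mapped file. A minimal sketch, assuming `net.openhft:chronicle-map` is on the classpath; the file path and sizing parameters are illustrative and would need tuning for the real 1 TB data set:

```java
import java.io.File;
import java.io.IOException;
import net.openhft.chronicle.map.ChronicleMap;

public class PersistedMapDemo {
    // Round-trips one entry through a file-backed Chronicle Map.
    static String roundTrip(File file) throws IOException {
        try (ChronicleMap<CharSequence, CharSequence> map = ChronicleMap
                .of(CharSequence.class, CharSequence.class)
                .name("demo-map")
                .averageKeySize(32)     // sizing hints: tune for real data
                .averageValueSize(400)  // entries are <500 bytes here
                .entries(1_000_000)     // use ~2.5e9 for a ~1 TB data set
                .createPersistedTo(file)) { // memory-mapped backing file
            map.put("key-1", "value-1");
            return map.get("key-1").toString();
        }
    }

    public static void main(String[] args) throws IOException {
        File f = new File(System.getProperty("java.io.tmpdir"), "demo-map.dat");
        System.out.println(roundTrip(f)); // prints value-1
    }
}
```

Because the backing file is memory-mapped, reopening the same file later recovers the persisted entries without any explicit load step.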

Note: we have tested Chronicle Map up to 2.5 billion entries, and it performs well at that scale because it uses 64-bit hashing of keys.