I'm using an `std::map` to store about 20 million entries. Stored without any container overhead, they would take approximately 650MB of memory (roughly 33 bytes per entry). However, stored in the `std::map`, the program uses about 15GB of memory, which is far too much.
The reason I'm using an `std::map` is that I need to find keys that are equal to, larger than, or smaller than some key `x`. This is why something like sparsehash wouldn't work: being a hash map, it can't find keys by comparison.
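Concretely, the lookups I need are the ones the ordered interface gives me. A minimal sketch (the key and value types here are made up for illustration):

```cpp
#include <cstdint>
#include <iostream>
#include <iterator>
#include <map>

int main() {
    // Hypothetical key/value types, standing in for the real data.
    std::map<std::uint64_t, double> m{{10, 1.0}, {20, 2.0}, {30, 3.0}};

    const std::uint64_t x = 25;

    // First key >= x (here: 30); O(log n) on the underlying tree.
    auto ge = m.lower_bound(x);
    if (ge != m.end())
        std::cout << "first key >= " << x << ": " << ge->first << '\n';

    // Largest key < x (here: 20): step back from lower_bound.
    if (ge != m.begin())
        std::cout << "largest key < " << x << ": " << std::prev(ge)->first << '\n';

    // Exact match, if any.
    auto it = m.find(x);
    if (it != m.end())
        std::cout << "exact match: " << it->second << '\n';
}
```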
Is there an alternative to `std::map` (or to ordered maps in general) that would result in less memory usage?
EDIT: Write performance is much more important than read performance. The program will probably read only ~10 entries, but I don't know in advance which ones.
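For reference, the lowest-overhead alternative I'm aware of is a sorted `std::vector` searched with `std::lower_bound`: it supports the same comparison queries with essentially no per-entry overhead, but mid-vector inserts are O(n), so it only fits if writes can be batched and sorted once. A sketch under that assumption:

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <utility>
#include <vector>

int main() {
    // Low-overhead layout: a sorted vector of (key, value) pairs.
    // Per-entry cost is just the pair itself; no tree nodes.
    std::vector<std::pair<std::uint64_t, double>> v;
    v.reserve(3);  // in the real case: reserve(20'000'000)

    // Append in any order, then sort once (writes batched up front).
    v.push_back({30, 3.0});
    v.push_back({10, 1.0});
    v.push_back({20, 2.0});
    std::sort(v.begin(), v.end());

    const std::uint64_t x = 25;

    // Binary search for the first key >= x, comparing on the key only.
    auto ge = std::lower_bound(v.begin(), v.end(), x,
        [](const auto& p, std::uint64_t k) { return p.first < k; });
    if (ge != v.end())
        std::cout << "first key >= " << x << ": " << ge->first << '\n';
}
```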
Turns out the issue wasn't `std::map`. I realized I was using 3 separate maps to represent different parts of the same data, and after slimming them down to 1, the difference in memory was entirely negligible.
Looking at the code a little more, I realized that the code I had written to free a really expensive struct (one per map element) didn't actually work.
After fixing that, the program now uses <1GB of memory, as it should! :)
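For anyone hitting something similar, the fix amounted to letting the per-element payload own its memory via RAII instead of relying on a manual free. A sketch with hypothetical names (`Payload` stands in for my expensive struct):

```cpp
#include <cstdint>
#include <map>
#include <memory>

// Hypothetical stand-in for the expensive per-element struct.
struct Payload {
    char big[700];  // imagine several hundred bytes of data
};

int main() {
    // Before: std::map<std::uint64_t, Payload*> with a manual free
    // that silently never ran, leaking every Payload.
    // After: unique_ptr owns the Payload, so erase/clear/destruction
    // release it automatically.
    std::map<std::uint64_t, std::unique_ptr<Payload>> m;
    m.emplace(42, std::make_unique<Payload>());

    m.erase(42);  // Payload destroyed here; no explicit delete needed
    m.clear();    // likewise for anything left over
}
```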
TL;DR: `std::map`'s overhead is entirely negligible for this. The issue was my own.