We can avoid many calls to a storage allocator by keeping a collection of available nodes in its own structure.
This idea can be applied to the binary search tree data structure.
The author says: "Allocating the nodes all at once can greatly reduce the tree's space requirements, which reduces the run time by about a third."
I'm curious how this trick can reduce space requirements. I mean, if we want to build a binary search tree with four nodes, we need to allocate memory for those four nodes whether we allocate them one by one or all at once.
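For reference, here is a minimal sketch of what I understand the trick to be: grab one big block of nodes up front and hand them out one at a time. The names (node_pool, pool_used, node_alloc) are mine, not the book's.

```c
#include <stdlib.h>

struct node {
    struct node *left;
    struct node *right;
    int value;
};

static struct node *node_pool;  /* one block holding every node          */
static size_t       pool_used;  /* how many nodes have been handed out   */

/* Allocate all nodes at once; error handling and bounds checks omitted. */
void pool_init(size_t max_nodes)
{
    node_pool = malloc(max_nodes * sizeof *node_pool); /* single malloc call */
    pool_used = 0;
}

/* Hand out the next node from the pool instead of calling malloc per node. */
struct node *node_alloc(void)
{
    return &node_pool[pool_used++];
}
```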
Memory allocators are notoriously bad at allocating very small objects. The situation has somewhat improved in the last decade, but the trick from the book is still relevant.
Most allocators keep additional bookkeeping information with each block they hand out so that the memory can be freed properly later. For example, the malloc/free pair in C or the new[]/delete[] pair in C++ needs to record the length of the actual memory chunk somewhere; usually this data ends up in the four bytes just before the address returned to you. This means that at least four additional bytes are wasted on every allocation. If your tree node takes twelve bytes (four bytes for each of the two pointers plus four bytes for the number), sixteen bytes are allocated for each node - a 33.3% increase.
The memory allocator needs to perform additional bookkeeping as well: every time a chunk is taken from the heap, the allocator must account for it.
Finally, the more memory your tree uses, the lower the chance that an adjacent node will already be in the cache when the current node is processed, because of the greater distance in memory between nodes.