What's wrong with custom allocators in C++?


Bjarne Stroustrup in his book The C++ Programming language says that:

Advice: Think twice before writing your own allocator

What does Bjarne want to say by giving the above advice? What problems can arise if I write my own allocator? Is it really problematic? How should I overcome those problems?

1 Answer

EmeryBerger (accepted answer):

Together with two colleagues, Ben Zorn and Kathryn McKinley (both now at Microsoft Research), I wrote a paper about this (Reconsidering Custom Memory Allocation, OOPSLA 2002). It won a Most Influential Paper Award -- here's the citation.

Custom memory management is often used in systems software for the purpose of decreasing the cost of allocation and tightly controlling memory footprint of software. Until 2002, it was taken for granted that application-specific memory allocators were superior to general purpose libraries. Berger, Zorn and McKinley’s paper demonstrated through a rigorous empirical study that this assumption is not well-founded, and gave insights into the reasons why general purpose allocators can outperform handcrafted ones. The paper also stands out for the quality of its empirical methodology.

The original paper actually covered somewhat more than the citation suggests; here's the abstract. The Lea allocator it refers to (dlmalloc) forms the basis of the Linux (glibc) allocator.

Programmers hoping to achieve performance improvements often use custom memory allocators. This in-depth study examines eight applications that use custom allocators. Surprisingly, for six of these applications, a state-of-the-art general-purpose allocator (the Lea allocator) performs as well as or better than the custom allocators. The two exceptions use regions, which deliver higher performance (improvements of up to 44%). Regions also reduce programmer burden and eliminate a source of memory leaks. However, we show that the inability of programmers to free individual objects within regions can lead to a substantial increase in memory consumption. Worse, this limitation precludes the use of regions for common programming idioms, reducing their usefulness.

We present a generalization of general-purpose and region-based allocators that we call reaps. Reaps are a combination of regions and heaps, providing a full range of region semantics with the addition of individual object deletion. We show that our implementation of reaps provides high performance, outperforming other allocators with region-like semantics. We then use a case study to demonstrate the space advantages and software engineering benefits of reaps in practice. Our results indicate that programmers needing fast regions should use reaps, and that most programmers considering custom allocators should instead use the Lea allocator.

We recently did a follow-on study and found almost exactly the same effect on several modern applications: the custom allocators typically slow down the application.

Beyond the performance and space hit, rolling your own custom memory allocator also makes debugging harder, and it means you can't ride the wave of improvements in general-purpose allocators -- that includes system-provided ones and others like Hoard and tcmalloc.