Why Large Object Heap and why do we care?


I have read about generations and the Large Object Heap, but I still fail to understand the significance (or benefit) of having a Large Object Heap.

What could have gone wrong (in terms of performance or memory) if the CLR had just relied on Generation 2 for storing large objects (considering that the size budgets for Gen 0 and Gen 1 are too small for that)?


There are 5 answers

Answered by Hans Passant (best answer)

A garbage collection doesn't just get rid of unreferenced objects; it also compacts the heap. That's a very important optimization. It doesn't just make memory usage more efficient (no unused holes), it also makes the CPU cache much more efficient. The cache is a really big deal on modern processors; it is easily an order of magnitude faster than the memory bus.

Compacting is done simply by copying bytes. That, however, takes time. The larger the object, the more likely it is that the cost of copying it outweighs the possible CPU cache usage improvements.

So they ran a bunch of benchmarks to determine the break-even point, and arrived at 85,000 bytes as the cutoff where copying no longer improves perf. There's a special exception for arrays of double: they are considered 'large' when the array has more than 1000 elements. That's another optimization for 32-bit code: the large object heap allocator has the special property that it allocates memory at addresses aligned to 8, unlike the regular generational allocator, which only allocates aligned to 4. That alignment is a big deal for double; reading or writing a mis-aligned double is very expensive. Oddly, the sparse Microsoft documentation never mentions arrays of long; not sure what's up with that.
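To see where that cutoff lands in practice, here is a minimal C# sketch (the class name and array sizes are just illustrative; objects allocated on the LOH are reported as belonging to the highest generation by GC.GetGeneration):

```csharp
using System;

class LohThresholdDemo
{
    static void Main()
    {
        // Small objects start out in generation 0 of the regular (small object) heap.
        var small = new byte[80 * 1024];   // 81,920 bytes, under the 85,000-byte cutoff
        // Objects over the cutoff go straight to the Large Object Heap.
        var large = new byte[85 * 1024];   // 87,040 bytes, over the cutoff

        Console.WriteLine(GC.GetGeneration(small));   // typically 0
        Console.WriteLine(GC.GetGeneration(large));   // 2 - the LOH is collected with gen 2

        // On 32-bit, double arrays get the special lower element-count threshold
        // mentioned above; on 64-bit this array is far below 85,000 bytes.
        var doubles = new double[1001];
        Console.WriteLine(GC.GetGeneration(doubles)); // 2 on 32-bit, usually 0 on 64-bit
    }
}
```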

Fwiw, there's lots of programmer angst about the large object heap not getting compacted. This invariably gets triggered when people write programs that consume more than half of the entire available address space, then use a tool like a memory profiler to find out why the program bombed even though there was still lots of unused virtual memory available. Such a tool shows the holes in the LOH, unused chunks of memory where a large object previously lived but got garbage collected. Such is the inevitable price of the LOH: a hole can only be re-used by an allocation for an object of equal or smaller size. The real problem is assuming that a program should be allowed to consume all virtual memory at any time.

It's a problem that otherwise disappears completely by just running the code on a 64-bit operating system. A 64-bit process has 8 terabytes of virtual address space available, three orders of magnitude more than a 32-bit process; you just can't run out of holes.

Long story short, the LOH makes code run more efficiently, at the cost of using the available virtual address space less efficiently.


UPDATE: .NET 4.5.1 now supports compacting the LOH via the GCSettings.LargeObjectHeapCompactionMode property. Beware the consequences, please.
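For reference, a minimal sketch of how that property is used (the setting requests a one-time compaction on the next blocking full GC and then resets itself to Default; forcing a collection like this is rarely a good idea):

```csharp
using System;
using System.Runtime;

class LohCompactionDemo
{
    static void Main()
    {
        // Request that the LOH be compacted during the next full, blocking GC.
        GCSettings.LargeObjectHeapCompactionMode =
            GCLargeObjectHeapCompactionMode.CompactOnce;

        // Trigger that full collection; the setting automatically resets to
        // Default afterwards, so this is a one-shot operation.
        GC.Collect();
    }
}
```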

Answered by Chris Shain

I am not an expert on the CLR, but I would imagine that having a dedicated heap for large objects can prevent unnecessary GC sweeps of the existing generational heaps. Allocating a large object requires a significant amount of contiguous free memory. In order to provide that from the scattered "holes" in the generational heaps, you'd need frequent compactions (which are only done with GC cycles).

Answered by Myles McDonnell

The principle is that it is unlikely (and quite possibly bad design) for a process to create lots of short-lived large objects, so the CLR allocates large objects to a separate heap on which it runs GC on a different schedule from the regular heap. http://msdn.microsoft.com/en-us/magazine/cc534993.aspx

Answered by grapeot

The essential difference between the Small Object Heap (SOH) and the Large Object Heap (LOH) is that memory in the SOH gets compacted when collected, while the LOH does not, as this article illustrates. Compacting large objects costs a lot. Following the examples in the article: say moving a byte in memory needs 2 cycles; then compacting an 8 MB object means copying roughly 8 million bytes, i.e. about 16 million cycles, which on a 2 GHz machine takes about 8 ms, a large cost. Considering that large objects (arrays in most cases) are quite common in practice, I suppose that's the reason why Microsoft keeps large objects in place in memory and proposed the LOH.

BTW, according to this post, the LOH usually doesn't cause memory fragmentation problems.

Answered by oleksii

If an object's size is greater than a certain threshold (85,000 bytes in .NET 1), then the CLR puts it on the Large Object Heap. This optimises:

  1. Object allocation (small objects are not mixed with large objects)
  2. Garbage collection (the LOH is collected only on a full GC; see the sketch below)
  3. Memory defragmentation (the LOH is never compacted by default)
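The second point can be illustrated with a small sketch (a release build is assumed; GC timing isn't guaranteed, and a debug build may keep the array rooted longer than expected):

```csharp
using System;

class LohCollectionDemo
{
    static void Main()
    {
        // Track a large array without keeping it alive.
        var weak = new WeakReference(new byte[200000]);   // > 85,000 bytes -> LOH

        // Gen 0/1 collections do not touch the LOH, so the array survives them.
        GC.Collect(1);
        Console.WriteLine(weak.IsAlive);    // usually True

        // A full (gen 2) collection also collects the LOH.
        GC.Collect(2);
        Console.WriteLine(weak.IsAlive);    // usually False
    }
}
```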