How to return memory from a process to the OS

I have an issue with memory management in various operating systems.

My program is a server whose processing can take a few GB of memory. Afterwards it releases most of that memory and then waits, possibly for hours, until the next request arrives.

On AIX and Solaris, I observe the following behavior:

When I free memory, it is not returned to the operating system. The amount of virtual memory used by the process only ever increases - it never decreases. The same is true of physical memory, up to its limit. So the process appears to hold on to all that memory even while it sits idle.

When can this memory be returned to the OS? How can I make that happen?

Linux is different: it does appear to return memory sometimes, but I can't work out when and how. For example, the process was 100MB before a request, 700MB at the peak, and 600MB after releasing everything. I don't understand it - if Linux gives memory back to the OS, why not all of it?

There are 5 answers

Answer from DThought

How memory is allocated (and possibly given back to the OS) is decided by the C library, I assume, so the programming language / library stack you are using may be the reason for this behavior.

I assume glibc will return non-fragmented memory at the top of the heap. Say your process allocates 10MB of data that it uses all the time. After that, 500MB of data used during processing is allocated. After that, a tiny fragment that is kept even after processing (perhaps the result) is allocated, followed by another 500MB. The memory layout is then:

|10MB used|500MB processing|1MB result|500MB processing| = 1011MB total

When the 1000MB of processing memory are freed, the layout is:

|10MB used|500MB freed|1MB result|500MB freed|

glibc might now return the memory at the end of the heap, leaving:

|10MB used|500MB freed|1MB result| = 511MB "in use"

even though only 11MB of it is actually used.

I assume that is what happens; you'll need to do further research (separate memory pools come to mind) to ensure all memory is freed.
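Here is a scaled-down sketch of that layout (Linux/glibc assumed; the chunk sizes, counts, and the malloc_trim() call are mine, chosen so every block stays below glibc's default 128 KiB mmap threshold and therefore lives on the sbrk-managed heap):

    /* Scaled-down sketch of the fragmentation pattern described above.
     * Linux/glibc assumed; all chunks stay below the default mmap
     * threshold (128 KiB), so they come from the sbrk-managed heap. */
    #include <malloc.h>   /* malloc_trim() is glibc-specific */
    #include <stdio.h>
    #include <stdlib.h>

    #define CHUNK   (64 * 1024)   /* 64 KiB per chunk */
    #define NCHUNKS 8000          /* ~500 MiB per processing run */

    int main(void) {
        static void *run1[NCHUNKS], *run2[NCHUNKS];

        void *used = malloc(CHUNK);              /* long-lived data      */
        for (int i = 0; i < NCHUNKS; i++)        /* first processing run */
            run1[i] = malloc(CHUNK);
        void *result = malloc(CHUNK);            /* kept after the run   */
        for (int i = 0; i < NCHUNKS; i++)        /* second processing run */
            run2[i] = malloc(CHUNK);

        for (int i = 0; i < NCHUNKS; i++) {
            free(run1[i]);
            free(run2[i]);
        }
        /* run2 sat (roughly) at the top of the heap, so glibc can shrink
         * the break past it; the break can only move down from the top,
         * so the freed run1 region stays in the heap below 'result'. */
        malloc_trim(0);

        puts("inspect VSZ/RSS now, e.g. with ps or /proc/self/status");
        getchar();

        free(result);
        free(used);
        return 0;
    }

Note that recent glibc versions can additionally release the pages inside interior free chunks with madvise(), so the RSS you observe may drop further than the diagram suggests; the virtual size, however, can only shrink from the top.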

Answer from Brian Agnew

I think the only reliable and portable way to do this is to spawn a new process to handle your request. Upon process exit the OS will reap all the associated memory.

Unfortunately you then incur the inefficiencies of spawning that process and of inter-process communication (I note you're doing a lot of processing - I don't know whether that implies sizable data requirements for the inter-process communication). However, you will get the memory behaviour you require. Note that the OS shouldn't duplicate the memory consumed by the JVM itself, provided you spawn an identical JVM binary image.
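A minimal sketch of the pattern (POSIX assumed; the 1 GiB working set and handle_request() are illustrative, not from the question):

    /* Sketch: run the memory-hungry work in a child process; when it
     * calls _exit(), the kernel reclaims every page it allocated. */
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void handle_request(void) {
        size_t big = 1UL << 30;       /* hypothetical 1 GiB working set */
        char *work = malloc(big);
        if (work != NULL)
            memset(work, 1, big);     /* touch the pages so they count */
        /* ... processing ... deliberately no free(): exit cleans up */
    }

    int main(void) {
        /* One request shown; a real server would loop around this. */
        pid_t pid = fork();
        if (pid == 0) {               /* child */
            handle_request();
            _exit(0);                 /* OS reaps the child's memory here */
        }
        waitpid(pid, NULL, 0);        /* parent: its own footprint never grew */
        return 0;
    }

Since fork() uses copy-on-write, the child initially shares the parent's pages rather than duplicating them, which is what makes the "identical binary image" point above work.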

Answer from Jens Kilian

The glibc library (which is normally used as the standard C library in Linux) can allocate memory in two ways - with sbrk() or with mmap(). It will use mmap() for large enough allocations.

Memory allocated with sbrk() cannot easily be given up again (only in special cases, and as far as I know glibc doesn't even try). Memory allocated with mmap() can be returned using munmap().

If you depend on being able to return memory to the OS, you can use mmap() directly instead of malloc(); but this will become inefficient if you allocate lots of small blocks. You may need to implement your own pool allocator on top of mmap().
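For example, a sketch along those lines (the 512 MiB size is illustrative; MAP_ANONYMOUS is not strictly POSIX, though Linux, AIX and Solaris all provide it or the MAP_ANON spelling):

    /* Sketch: allocate a large buffer straight from the kernel with
     * mmap() and hand it back with munmap(), bypassing malloc(). */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 512UL * 1024 * 1024;   /* 512 MiB, illustrative */
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        /* ... use buf for the processing phase ... */

        /* Unlike free(), munmap() returns the pages to the kernel at
         * once: both the virtual size and the RSS of the process drop. */
        if (munmap(buf, len) != 0) { perror("munmap"); return 1; }
        return 0;
    }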

Answer from Axel

Most of the time, memory won't be returned to the system until the process terminates. Depending on the operating system and runtime library, memory might be given back earlier, but I don't know of any reliable way to make sure this happens.

If processing requires a few GB of memory, have your server wait for the request, then spawn a new process to do the processing - you can communicate with your server using pipes. When processing is done, return the result and terminate the spawned process.
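A sketch of that flow (POSIX assumed; process_request() and the long-sized result are stand-ins for the real processing and its output):

    /* Sketch: the spawned child sends its result back through a pipe
     * and terminates; the OS then reclaims all of its memory. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static long process_request(void) {
        /* stands in for the multi-GB processing phase */
        return 42;
    }

    int main(void) {
        int fd[2];
        if (pipe(fd) != 0) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {                           /* child: do the work */
            close(fd[0]);
            long result = process_request();
            write(fd[1], &result, sizeof result);
            _exit(0);                             /* memory goes back to the OS */
        }

        close(fd[1]);                             /* parent: read the result */
        long result;
        if (read(fd[0], &result, sizeof result) == (ssize_t) sizeof result)
            printf("result: %ld\n", result);
        close(fd[0]);
        waitpid(pid, NULL, 0);
        return 0;
    }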

Answer from AudioBubble

You should look at how paging works. Memory is managed in page-sized units, so you can't return a region smaller than getpagesize().
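For illustration (POSIX assumed):

    /* Sketch: memory is handed back to the kernel in page-sized units. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        long page = sysconf(_SC_PAGESIZE);   /* portable form of getpagesize() */
        printf("page size: %ld bytes\n", page);
        /* Anything returned to the OS (e.g. via munmap()) must cover a
         * whole number of these pages. */
        return 0;
    }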