We are running a single process in an OpenShift environment and have noticed that the longer the process runs, the more memory it allocates, until it hits the pod's upper memory limit and gets restarted.

At first we suspected a memory leak. However, when we run the same process on a Linux box with limited physical memory, the operating system automatically frees unused pages as it gets close to that limit, and memory consumption remains stable. This is not the case in OpenShift: the machine behind the pod has much more memory than what is defined in the YAML file, so the process reaches the defined limit before the OS ever starts to reorganize its memory. There is also no other process running that could put enough memory pressure on the OS to shrink the primary process's footprint.
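For context, the limit comes from a resource spec along these lines (a minimal sketch; the names, image, and values are placeholders, not our actual configuration):

```yaml
# Hypothetical pod spec excerpt; names and values are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: primary-process            # placeholder name
spec:
  containers:
    - name: app                    # placeholder name
      image: registry.example.com/app:latest   # placeholder image
      resources:
        requests:
          memory: "512Mi"          # placeholder request
        limits:
          memory: "1Gi"            # the limit the process keeps growing toward
```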
Is there a way to tell the OS to reclaim unused pages other than by letting the process run into the physical memory limit?
We have already tried to limit the user's available memory with `ulimit -m` and `ulimit -v`. The first is simply ignored (as far as we can tell, modern kernels no longer enforce RLIMIT_RSS), and the second does not trigger the freeing of unused pages; it just kills the process once it reaches the virtual memory limit. One idea was also to use cgroups, but the `cgexec` command is not available in the OpenShift terminal.
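For reference, our attempts look roughly like the sketch below (the limit values, binary name, and cgroup name are illustrative; which cgroup file exists depends on whether the node uses cgroup v1 or v2):

```sh
# Attempt 1: cap the resident set size. ulimit -m sets RLIMIT_RSS,
# which modern Linux kernels do not enforce, so this is ignored.
ulimit -m 524288                  # KiB, placeholder value

# Attempt 2: cap virtual memory. This does not make the kernel reclaim
# unused pages; the process just dies when it hits the limit.
ulimit -v 1048576                 # KiB, placeholder value

./our-process                     # placeholder for the actual binary

# Idea 3: run the process in a dedicated cgroup. cgexec (from libcgroup)
# is not available inside the OpenShift terminal:
cgexec -g memory:ourgroup ./our-process   # fails: command not found

# Note: inside the container, the pod limit is already exposed as a
# cgroup limit, which can be inspected with:
cat /sys/fs/cgroup/memory/memory.limit_in_bytes   # cgroup v1
cat /sys/fs/cgroup/memory.max                     # cgroup v2
```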