I have a single-node Elasticsearch cluster. It receives logs from a Kubernetes cluster through Rancher (which runs Fluentd pods on k8s to collect the logs). I am running Elasticsearch as a service on CentOS 7 and have given it 12 GB of JVM heap; the VM has 23 GB of RAM. Still, Elasticsearch uses all of the VM's RAM and constantly shuts down shortly after starting with a heap space error or OutOfMemoryError.
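For reference, the heap is set in /etc/elasticsearch/jvm.options (the standard path for an RPM service install, assuming that's your layout):

```
# /etc/elasticsearch/jvm.options -- relevant lines only
-Xms12g
-Xmx12g
```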
Can we set some config so that it clears the memory/heap when it is almost full, to prevent crashing?
- Why is Elasticsearch using all of the VM's RAM even though only 12 GB of heap is allocated?
- Why does it take 15-20 minutes to stop the service?
- How can I reduce its memory consumption?
- How can I reduce the load of incoming data on Elasticsearch?
P.S. Thanks in advance.
What you're experiencing is a typical case of an undersized heap/RAM for your current usage: the VM simply doesn't have enough memory to handle the load you're throwing at your ES instance. There are two ways out:

- scale up/out: give the node more RAM (or add more nodes) so that both the heap and the OS filesystem cache have room to breathe;
- reduce the load: ship less data, send fewer and larger bulk requests, and keep fewer indices around.
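A quick way to confirm the diagnosis is the node stats API, which breaks memory down into JVM heap usage and overall OS usage (host and port assumed to be the defaults):

```
# Show JVM heap usage vs. OS-level memory for the node
curl -s 'http://localhost:9200/_nodes/stats/jvm,os?pretty'
```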
Now to answer your questions:

- Elasticsearch is expected to use much more memory than the configured heap: Lucene deliberately leaves index data to the OS filesystem cache (memory-mapped files), and ES also allocates off-heap memory for thread stacks and network buffers. That is exactly why the official guidance is to give the heap at most 50% of the machine's RAM; your 12 GB out of 23 GB is right at that limit, so the "missing" RAM is mostly filesystem cache, which is normal.
- A stop that takes 15-20 minutes usually means the node is busy flushing its in-memory indexing buffers and translog to disk, while at the same time stalling in long GC pauses because the heap is nearly full.
- You can't tell the JVM to "clear" the heap on demand; the garbage collector already runs continuously, and an OutOfMemoryError means live data genuinely doesn't fit. The practical fix is to keep less data in memory: fewer shards, shorter retention (delete old log indices; see the ILM sketch below), and, if possible, more RAM.
- The load of incoming data is best reduced on the Fluentd side: filter out noisy logs, buffer to disk, and send fewer, larger bulk requests (see the Fluentd sketch below).
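Assuming you're on Elasticsearch 6.6 or later, a minimal ILM policy that deletes log indices after 7 days could look like this (the policy name logs-cleanup is just an example):

```
# Create a lifecycle policy that deletes indices 7 days after creation
curl -s -X PUT 'http://localhost:9200/_ilm/policy/logs-cleanup' \
  -H 'Content-Type: application/json' -d '
{
  "policy": {
    "phases": {
      "delete": { "min_age": "7d", "actions": { "delete": {} } }
    }
  }
}'
```

The policy still has to be attached to your indices, e.g. via the index.lifecycle.name setting in the index template that your log indices match.

On the Fluentd side, the relevant knobs live in the output's buffer section. Rancher generates the Fluentd config for you, so treat this as an illustration of which parameters to tune rather than a drop-in config (host and paths are placeholders):

```
<match kubernetes.**>
  @type elasticsearch
  host es.example.com          # placeholder
  port 9200
  logstash_format true
  <buffer>
    @type file                 # buffer to disk instead of memory
    path /var/log/fluentd-buffers/es
    chunk_limit_size 8m        # fewer, larger bulk requests
    flush_interval 30s         # flush less often
    flush_thread_count 2
    overflow_action drop_oldest_chunk   # shed load instead of drowning ES
  </buffer>
</match>
```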
Feel free to provide more details about your data volume, indexing request frequency, etc., and I'll refine this answer.