How to optimize memory and heap usage on a single-node Elasticsearch cluster


I have a single-node Elasticsearch cluster. It receives logs from a Kubernetes cluster through Rancher (which runs fluentd pods on k8s to collect the logs). I am running Elasticsearch as a service on CentOS 7 and have given it 12 GB of JVM heap; the VM has 23 GB of RAM. Elasticsearch still uses all of the VM's RAM and constantly shuts down shortly after starting with heap space / OutOfMemoryError errors.

Can I set some configuration so that it clears memory/heap when it is almost full, to prevent crashing?
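(For reference: Elasticsearch does not actively clear a nearly full heap, but its circuit breakers can reject requests that would exceed a memory limit rather than letting the node crash. A minimal sketch of tightening the parent breaker via the cluster settings API; the 70% value is purely illustrative, and a node at localhost:9200 without authentication is assumed:)

```python
import requests

# Sketch, not a drop-in fix: Elasticsearch cannot free heap on demand, but the
# parent circuit breaker rejects requests that would push heap usage past a
# limit instead of letting the node die with an OutOfMemoryError.
# Assumes the node is reachable at localhost:9200 with no authentication;
# the 70% value is illustrative, not a recommendation.
resp = requests.put(
    "http://localhost:9200/_cluster/settings",
    json={"persistent": {"indices.breaker.total.limit": "70%"}},
)
resp.raise_for_status()
print(resp.json())
```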

  1. Why is Elasticsearch using all of the VM's RAM even though only 12 GB of heap is allocated?
  2. Why does stopping the service take 15-20 minutes?
  3. How can I reduce its memory consumption?
  4. How can I reduce the load that incoming data puts on Elasticsearch?

p.s. Thanks in advance.


1 Answer

Answered by Val

What you're experiencing is a typical case of an undersized heap/RAM for your current usage: your node lacks the memory resources to handle the load you're throwing at your ES instance. There are two ways out:

  1. First, scale vertically, i.e. increase the RAM and heap size on your node, up to roughly 30 GB of heap (64 GB of RAM); see the jvm.options sketch after this list.
  2. Then, if your node still can't handle the load, scale horizontally, i.e. add a new node to spread the load across two VMs.
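If you go the vertical route, the heap itself is set in jvm.options rather than elasticsearch.yml. A minimal sketch, assuming a package install on CentOS 7 (the path and the 30 GB figure are the usual conventions; adjust to your setup):

```
# /etc/elasticsearch/jvm.options -- usual path for an RPM/package install.
# Set min and max heap to the same value. Staying at or below ~30 GB keeps
# compressed object pointers enabled, which is why the advice stops there.
-Xms30g
-Xmx30g
```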

Now to answer your questions:

  1. The Elasticsearch JVM process uses the heap you have allocated, but Lucene, the search engine at the heart of Elasticsearch, doesn't go through the heap: it relies on off-heap memory, mostly the OS filesystem cache for the memory-mapped index files, which explains why you're seeing the remaining RAM being used (see the sketch after this list). This is explained in the first link I shared above.
  2. There can be many reasons; hard to say without more details.
  3. Hard to answer without knowing your use case, what you're doing, and how you're sending your data.
  4. Same answer as 3.
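To see the heap vs. off-heap split for yourself, the node stats API reports both JVM and OS memory. A small sketch, assuming the node listens on localhost:9200 with no authentication:

```python
import requests

# Compare JVM heap usage with overall OS memory usage on the (assumed) local
# single-node cluster at localhost:9200, no authentication.
resp = requests.get("http://localhost:9200/_nodes/stats/jvm,os")
resp.raise_for_status()

for node in resp.json()["nodes"].values():
    heap = node["jvm"]["mem"]
    os_mem = node["os"]["mem"]
    print(f"node {node['name']}:")
    print(f"  JVM heap:  {heap['heap_used_in_bytes'] / 2**30:.1f} GiB "
          f"used of {heap['heap_max_in_bytes'] / 2**30:.1f} GiB")
    # The gap between OS 'used' memory and the JVM heap is largely Lucene's
    # off-heap usage plus the filesystem cache holding the index files.
    print(f"  OS memory: {os_mem['used_in_bytes'] / 2**30:.1f} GiB "
          f"used of {os_mem['total_in_bytes'] / 2**30:.1f} GiB")
```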

Feel free to provide more details about volumetry, frequency of indexing requests, etc.
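For example, per-index document counts and on-disk sizes (a rough proxy for volumetry) can be pulled from the cat indices API; a sketch under the same localhost:9200 / no-auth assumption:

```python
import requests

# Pull per-index document counts and on-disk size (largest first) as a rough
# measure of volumetry. Same assumptions: localhost:9200, no authentication.
resp = requests.get(
    "http://localhost:9200/_cat/indices",
    params={"format": "json", "bytes": "b", "s": "store.size:desc"},
)
resp.raise_for_status()

for idx in resp.json():
    size_gib = int(idx["store.size"] or 0) / 2**30
    docs = idx["docs.count"] or "?"
    print(f"{idx['index']:<40} {docs:>12} docs  {size_gib:8.2f} GiB")
```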