Cassandra eats memory

I have Cassandra 2.1 with the following properties set:

MAX_HEAP_SIZE="5G"
HEAP_NEWSIZE="800M"
memtable_allocation_type: heap_buffers
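For reference, the first two values are set in conf/cassandra-env.sh (a shell script), while memtable_allocation_type lives in conf/cassandra.yaml. A sketch of the cassandra-env.sh portion:

```shell
# conf/cassandra-env.sh -- explicit heap sizing (values from this setup)
MAX_HEAP_SIZE="5G"
HEAP_NEWSIZE="800M"
```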

The top utility shows that Cassandra is using 14.6 GB of virtual memory:

KiB Mem:  16433148 total, 16276592 used,   156556 free,    22920 buffers
KiB Swap: 16777212 total,        0 used, 16777212 free.  9295960 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
23120 cassand+  20   0 14.653g 5.475g  29132 S 318.8 34.9  27:07.43 java

It also dies with various OutOfMemoryError exceptions when I access it from Spark.

How can I prevent these OutOfMemoryErrors and reduce memory usage?

1 Answer

Answered by Akash Sethi:

Cassandra does use a lot of memory, but it can be controlled by tuning the GC (garbage collection) settings.

In Cassandra 2.1, the GC parameters live in conf/cassandra-env.sh, where they are appended to the JVM_OPTS variable.

You can add the following settings there:

    -XX:+UseConcMarkSweepGC
    -XX:ParallelCMSThreads=1
    -XX:+CMSIncrementalMode
    -XX:+CMSIncrementalPacing
    -XX:CMSIncrementalDutyCycleMin=0
    -XX:CMSIncrementalDutyCycle=10
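In 2.1 these flags are typically appended to JVM_OPTS near the bottom of conf/cassandra-env.sh. A sketch of how that might look (the flag values are the ones from this answer, not tuned for any particular workload):

```shell
# conf/cassandra-env.sh -- append the CMS tuning flags from above.
# Sketch only: adjust the values for your own workload.
JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
JVM_OPTS="$JVM_OPTS -XX:ParallelCMSThreads=1"
JVM_OPTS="$JVM_OPTS -XX:+CMSIncrementalMode"
JVM_OPTS="$JVM_OPTS -XX:+CMSIncrementalPacing"
JVM_OPTS="$JVM_OPTS -XX:CMSIncrementalDutyCycleMin=0"
JVM_OPTS="$JVM_OPTS -XX:CMSIncrementalDutyCycle=10"
```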

Alternatively, instead of setting MAX_HEAP_SIZE and HEAP_NEWSIZE yourself, leave them unset and let Cassandra's startup script calculate them, because it will pick sensible values for these parameters based on your system.
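Concretely, that means leaving both lines commented out in conf/cassandra-env.sh so the script's own heap-sizing logic runs (a sketch; the exact formula it uses depends on your Cassandra version and system RAM):

```shell
# conf/cassandra-env.sh -- leave both commented out so the script
# computes heap sizes automatically from the machine's memory.
#MAX_HEAP_SIZE="5G"
#HEAP_NEWSIZE="800M"
```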