I am using Elasticsearch 1.3 and have a large index called index_A. It holds more than 2 billion documents and is over 1.5 TB, and both write and read traffic are heavy.
Because the index is so large, I am running into problems with CPU usage, memory, I/O, GC, etc. I want to optimize it, and these are the approaches I'm considering:
1. JVM tuning. I am already running Java 8.
2. Elasticsearch configuration. I haven't found much useful information so far.
3. Splitting the large index into multiple smaller indices by one of its fields. I compared an index with 1 billion documents against one with 100 million and saw roughly a 10x performance improvement. Has anyone done this before?
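For the JVM tuning point, this is the kind of setup I am experimenting with. The values are examples, not recommendations; the heap cap is only there to stay below the compressed-oops threshold:

```shell
# Example environment for the Elasticsearch 1.x startup scripts.
# 30g is illustrative; the point is to stay below ~32 GB so the JVM
# can keep using compressed object pointers.
export ES_HEAP_SIZE=30g
# Avoid swapping the heap out (this also needs bootstrap.mlockall: true
# in elasticsearch.yml and a suitable memlock ulimit).
export MAX_LOCKED_MEMORY=unlimited
```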
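The splitting approach above can be sketched with a small routing helper on the client side. The field name ("region") and the index naming scheme are assumptions just for illustration:

```python
# Sketch: route each document to a per-field-value index instead of
# one huge index_A. "region" is a hypothetical splitting field.

def target_index(doc):
    """Return the per-region index name for a document, e.g. index_a_us."""
    return "index_a_{0}".format(doc["region"].lower())

doc = {"id": 1, "region": "US", "body": "..."}
print(target_index(doc))  # index_a_us

# At search time, a wildcard index pattern (or an alias that covers all
# the per-region indices) keeps queries looking like a single index:
search_indices = "index_a_*"
```

Queries that filter on the splitting field can then hit only the one small index, which is where I suspect most of the 10x improvement comes from.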
Any suggestions?
Thanks.