How to consume a high volume topic as KTABLE without exhausting memory/disk space?

We have a Kafka Streams app that performs a KStream-KTable inner join. Both topics are high volume, with 256 partitions each. The app is currently deployed on 8 nodes with an 8 GB heap each. The state store (RocksDB) persists to disk, and we are running out of disk space on the containers. What are our options for consuming one of the topics as a KTable while limiting the amount of data kept on disk, e.g. holding only a day's worth of keys, and having older state files deleted?
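For reference, our topology looks roughly like the sketch below. Topic names, serdes, and the join logic are placeholders, not our actual code; the point is that the second topic is materialized as a KTable, so its RocksDB store keeps the latest value for every key indefinitely.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Produced;

public class JoinTopology {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // High-volume event stream (256 partitions).
        KStream<String, String> events =
            builder.stream("events-topic",
                Consumed.with(Serdes.String(), Serdes.String()));

        // Second high-volume topic consumed as a KTable. The backing
        // RocksDB store grows without bound: a KTable retains the latest
        // value per key forever, with no time-based expiry.
        KTable<String, String> lookup =
            builder.table("lookup-topic",
                Materialized.with(Serdes.String(), Serdes.String()));

        // Inner join: only events whose key currently exists in the
        // table produce output.
        events.join(lookup, (event, ref) -> event + "|" + ref)
              .to("joined-output-topic",
                  Produced.with(Serdes.String(), Serdes.String()));
    }
}
```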

There are 0 answers