Optimizing YugabyteDB for tables with frequent deletes


[Question posted by a user on YugabyteDB Community Slack]

We wanted to check whether we should do any optimization on the database side for tables that have frequent inserts and deletes, like re-indexing or vacuuming.

Workload:

  • 300,000 row inserts per hour.
  • Of these, roughly 90% are deleted within the hour, and the remainder are cleaned up at the end of the day.

There is 1 answer


YugabyteDB's storage layer is based on RocksDB, which is an LSM-tree implementation. Any change, including a delete, is written as a new entry to the memtable. Unlike PostgreSQL, where updates and deletes leave behind dead row versions that must be cleaned up, YugabyteDB handles this cleanup automatically. When a memtable reaches a certain size, it is flushed to disk as an SST file, and once the number of SST files reaches a threshold, a background thread reads the SST files and merges them. During this compaction, overwritten and deleted versions that are older than the history retention window (15 minutes by default) are discarded as expired. This mechanism serves the same purpose as PostgreSQL vacuuming, but runs continuously in the background.
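For reference, the 15-minute window mentioned above corresponds to the yb-tserver history retention setting; a minimal sketch of its default value (verify the flag name and default against your YugabyteDB version):

```sh
# Overwritten and deleted versions older than this window become
# eligible for removal during compaction (default: 900 s = 15 min).
yb-tserver --timestamp_history_retention_interval_sec=900
```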

If you do batch DML, and especially time-series deletions, partitioning lets you query a single logical table while heavy operations, such as deleting a day's worth of data, are performed by dropping the daily partition instead of deleting row by row, as shown in the sketch below.
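A minimal sketch of that approach in YSQL, which supports PostgreSQL-style declarative partitioning (the events table and its columns are hypothetical):

```sql
-- Hypothetical time-series table, partitioned by day.
-- In PostgreSQL-style partitioning the partition key must be
-- part of the primary key, hence (id, created_at).
CREATE TABLE events (
    id         bigint,
    created_at timestamptz NOT NULL,
    payload    jsonb,
    PRIMARY KEY (id, created_at)
) PARTITION BY RANGE (created_at);

-- One partition per day.
CREATE TABLE events_2023_06_01 PARTITION OF events
    FOR VALUES FROM ('2023-06-01') TO ('2023-06-02');
CREATE TABLE events_2023_06_02 PARTITION OF events
    FOR VALUES FROM ('2023-06-02') TO ('2023-06-03');

-- End-of-day cleanup: drop the whole day's partition in one
-- DDL operation instead of deleting rows one by one.
DROP TABLE events_2023_06_01;
```

Dropping a partition removes its data as a single metadata-level operation, so it avoids writing one delete marker per row into the LSM tree and the compaction work that would otherwise follow.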