How to replace logged batches in Amazon Keyspaces


I am moving my product from a self-hosted Cassandra node to Amazon Keyspaces. One problem is that Amazon Keyspaces does not support logged batches, because they can consume too many resources in some cases.

In my code there are multiple places where I rely on logged batches, and I cannot find a reasonable replacement for them.

Use case: we have X tables into which we propagate the same rows, so that the data can be queried by different primary keys. We execute a logged batch here so that the data stays consistent across all of those tables.

The only solution that comes to my mind is to insert the same row into the X tables asynchronously and, on any failure, retry until no errors remain.
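The retry idea above can be sketched as a small driver-agnostic helper. This is a minimal sketch, not a driver API: `writes` is an assumed dict mapping each table name to a zero-argument callable that performs one idempotent INSERT (e.g. wrapping `session.execute` from the DataStax driver), and `max_attempts`/`backoff_s` are illustrative parameters.

```python
import time

def propagate_with_retry(writes, max_attempts=5, backoff_s=0.1):
    """Run every write; retry only the failed ones until all succeed.

    `writes` maps table name -> zero-arg callable doing one INSERT.
    All names here are hypothetical; plug in your own statements.
    """
    pending = dict(writes)
    for attempt in range(max_attempts):
        failed = {}
        for table, write in pending.items():
            try:
                write()
            except Exception:
                failed[table] = write          # keep it for the next round
        if not failed:
            return True                        # every table accepted the row
        pending = failed
        time.sleep(backoff_s * (2 ** attempt)) # exponential backoff
    return False                               # still failing after max_attempts
```

This is safe to retry because Cassandra/Keyspaces INSERTs are upserts, so re-running a write that actually succeeded does not corrupt data; the weakness is that a crash mid-loop loses the pending work, which is exactly what the ledger approaches in the answer address.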

1 Answer

Answered by MikeJPR:

You generally want to create a durable log of your transaction so that it can be replayed in the event of a failure. Two options are a messaging tier, or a ledger table in Keyspaces itself.

  1. Writing to a messaging tier such as Amazon Kinesis gives you at-least-once delivery semantics for your tables. A consumer reads each record, writes to the multiple tables, and retries on failure.

  2. Create a ledger in a separate Keyspaces table using a key-value model, where the value is the payload you want to write to the tables. Then perform the asynchronous calls to your N tables, and finally delete the ledger item. A separate process can periodically scan the ledger for transactions that did not complete and replay them.
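For option 1, the producer side could look roughly like the sketch below. The stream name, the event field names (`tables`, `row`), and the choice of `row["id"]` as partition key are all assumptions for illustration; `put_record` is the real boto3 Kinesis call.

```python
import json

def build_propagation_event(row, tables):
    """Serialize one logical write as a Kinesis record payload.

    `row` is the column map to insert; `tables` lists every table the
    consumer must write it to. The field names are an assumed schema.
    """
    return json.dumps({"tables": tables, "row": row}, sort_keys=True).encode()

def publish(kinesis_client, stream, row, tables):
    # At-least-once delivery means the consumer may see duplicates, so
    # the downstream INSERTs must be idempotent (Cassandra-style upserts are).
    kinesis_client.put_record(
        StreamName=stream,
        Data=build_propagation_event(row, tables),
        PartitionKey=row["id"],  # keeps writes for one entity ordered per shard
    )
```

The consumer would read these records, fan the `row` out to every table in `tables`, and only checkpoint the record once all writes succeed.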
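Option 2 can be sketched with in-memory dicts standing in for Keyspaces tables, to show the ordering of the three steps and the periodic sweep. In production the dict operations would be INSERT/DELETE statements against real tables; every name here is hypothetical.

```python
import uuid

class LedgerWriter:
    """In-memory sketch of the ledger pattern: intent record first,
    fan-out second, ledger cleanup last, plus a repair sweep."""

    def __init__(self, table_names):
        self.ledger = {}                         # txn_id -> (key, payload)
        self.tables = {t: {} for t in table_names}

    def write(self, key, payload):
        txn = str(uuid.uuid4())
        self.ledger[txn] = (key, payload)        # 1. durable intent record
        for rows in self.tables.values():        # 2. fan out (async in prod)
            rows[key] = payload
        del self.ledger[txn]                     # 3. all writes landed: clean up

    def sweep(self):
        # Periodic repair: replay any transaction whose ledger entry
        # survived a crash. Replaying is safe because INSERTs are upserts.
        for txn, (key, payload) in list(self.ledger.items()):
            for rows in self.tables.values():
                rows[key] = payload
            del self.ledger[txn]
```

The key property is that the ledger row is written before any table write and deleted only after all of them, so a crash at any point leaves either nothing or a replayable intent record behind.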