I'm using the rowMapper configuration with `new ClientConfig()` and `AmazonDaxClient`.
I'm having trouble keeping my DAX cluster in sync with my table(s). I understand that a read goes through a double hop (a cache miss, then a trip to DynamoDB) when an item has been updated outside the DAX in-memory cache. Given that a large amount of data has already been written directly to the table like this, how can I sync it to my DAX cluster without a client querying for it?
So I tried a throttled table scan against my DAX endpoint, but it only returns objects already in the cache; updates and insertions are not reflected in the scan results coming through DAX.
Any help?
As noted in the AWS DAX use cases, DAX isn't ideal for applications that require strongly consistent reads. For this reason:
After discussing this with an AWS solutions architect, this was confirmed to be the case. While you run a scan operation through DAX, an outside application may have written directly to the DynamoDB table. If the scan's result set is already in the cache, it counts as a cache hit: no cache miss is reported, and the cached result is returned as-is. The scan only becomes eventually consistent as cached entries expire (TTL) or are evicted (LRU).
Because DAX reads directly from its cache and only checks for a boolean cache hit or miss, without validating the cached contents against the table, the only workable approach is client-side logic to handle this, as mentioned in the documentation.
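The hit/miss behavior above can be sketched with a toy read-through cache. This is a minimal simulation with hypothetical names, not the DAX SDK: on a hit it returns the cached value without revalidating the backing table, so a write that goes straight to the table stays invisible until the cached entry is evicted (standing in for TTL expiry or LRU eviction).

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of DAX's item cache semantics (hypothetical names, not the real SDK).
public class ReadThroughCacheDemo {

    static class ReadThroughCache {
        private final Map<String, String> table;                    // stands in for the DynamoDB table
        private final Map<String, String> cache = new HashMap<>();  // stands in for the DAX cache

        ReadThroughCache(Map<String, String> table) {
            this.table = table;
        }

        // On a hit, return the cached value as-is; only a miss goes to the table.
        String get(String key) {
            if (cache.containsKey(key)) {
                return cache.get(key);       // cache hit: contents are not validated
            }
            String value = table.get(key);   // cache miss: double hop to the table
            cache.put(key, value);
            return value;
        }

        // Simulates TTL expiry / LRU eviction of a single entry.
        void evict(String key) {
            cache.remove(key);
        }
    }

    public static void main(String[] args) {
        Map<String, String> table = new HashMap<>();
        table.put("item1", "v1");

        ReadThroughCache dax = new ReadThroughCache(table);
        System.out.println(dax.get("item1")); // miss: reads "v1" from the table and caches it

        table.put("item1", "v2");             // outside application writes directly to the table
        System.out.println(dax.get("item1")); // hit: still "v1", the stale cached value

        dax.evict("item1");                   // after TTL expiry / LRU eviction...
        System.out.println(dax.get("item1")); // ...the next read is a miss and sees "v2"
    }
}
```

This is also why writing through the DAX client (rather than directly to the table) keeps the cache consistent: a write-through updates the cache entry at the same time as the table.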