Calibrating throughput of DynamoDB tables


I have a few tables that need throughput provisioning. Most of the time, these tables see a low background level of read and write calls, but during specific jobs they can experience quick bursts of read/write requests.

In your opinion, what is a good practice for choosing these provisioned throughput numbers? The impression I get from the description of Reserved Capacity (https://aws.amazon.com/blogs/aws/dynamodb-price-reduction-and-new-reserved-capacity-model/) is that it is basically like buying credits. Is it a good idea to buy them periodically to handle burst requests?

Thanks


1 Answer

Accepted answer by b-s-d:

My suggestion is to minimize read/write bursts as much as possible, since provisioning for them inevitably leaves capacity unused (but still paid for) during the idle periods.

Read Bursts: Try to isolate the most frequently accessed items in a separate table so you can provision the higher throughput specifically for those records.
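As a rough sketch of what that looks like with boto3 (the table name, key schema, and capacity numbers below are placeholders you would size for your own workload):

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Hypothetical "hot items" table: it gets the high read throughput,
# while the main table can stay provisioned for its low background load.
dynamodb.create_table(
    TableName="hot-items",  # placeholder name
    AttributeDefinitions=[
        {"AttributeName": "item_id", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "item_id", "KeyType": "HASH"},
    ],
    ProvisionedThroughput={
        "ReadCapacityUnits": 200,  # sized for the burst reads (assumption)
        "WriteCapacityUnits": 5,   # low background writes
    },
)
```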

Write Bursts: Throttling write activity on the application side could help you smooth out the bursts and gives you more direct control over your write requests.
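A minimal client-side throttling sketch, assuming boto3, a placeholder table name, and a target rate that you would set at or below the table's provisioned write capacity:

```python
import time
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

def throttled_writes(items, writes_per_second=20):
    """Spread PutItem calls out over time instead of firing them as one burst.

    `items` are assumed to already be in DynamoDB attribute-value format,
    e.g. {"item_id": {"S": "abc"}}. `writes_per_second` is an assumption;
    derive it from the table's provisioned write capacity.
    """
    interval = 1.0 / writes_per_second
    for item in items:
        start = time.monotonic()
        dynamodb.put_item(TableName="my-table", Item=item)  # placeholder table
        # Sleep off the remainder of the interval so the average write rate
        # stays under the provisioned limit.
        elapsed = time.monotonic() - start
        if elapsed < interval:
            time.sleep(interval - elapsed)
```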

In case you haven't used it yet, Dynamic DynamoDB can be a useful tool to have in your toolbox for automating your provisioned throughput configuration.
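Tools like that ultimately adjust the table's throughput via UpdateTable; if you would rather script it yourself around your known burst jobs, a hedged sketch with boto3 (table name and capacity numbers are placeholders) could look like this. Keep in mind that DynamoDB limits how many times per day you can decrease a table's throughput.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

def set_throughput(table_name, read_units, write_units):
    # Adjust the table's provisioned capacity; the change is asynchronous,
    # so the table goes through an UPDATING state before it takes effect.
    dynamodb.update_table(
        TableName=table_name,
        ProvisionedThroughput={
            "ReadCapacityUnits": read_units,
            "WriteCapacityUnits": write_units,
        },
    )

set_throughput("my-table", read_units=500, write_units=100)  # before the burst job
# ... run the burst job ...
set_throughput("my-table", read_units=10, write_units=5)     # back to baseline
```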

Also, the following topics from the documentation might help you figure out the best solution for your specific case: Avoid Sudden Bursts of Read Activity, Use Burst Capacity Sparingly, and Distribute Write Activity During Data Upload.