I've created a data pipeline that pulls data from S3 and pushes it into DynamoDB.
The pipeline started running successfully.
I set the write capacity to 20,000 units. After a few hours the write throughput dropped by about half, and the pipeline is now still running but writing at only 3 units.
(The provisioned write capacity didn't change. The pipeline started writing at the provisioned threshold, then dropped to 3 units and has kept running at that rate.)
What could be the reason for the decrease? Is there a way to make it faster?
Thanks.
I am assuming here that you used the out-of-the-box Data Pipeline template to copy data from S3 to DynamoDB. That pipeline does not alter the capacity of your DynamoDB table (unless you modified the pipeline and added code to increase it programmatically). So if the write capacity of the DynamoDB table changed from 20,000 to 3, someone must have changed it manually. I would suggest enabling CloudTrail in your AWS account so that you can find out who made the change, and when, if the same thing happens again.
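For reference, a programmatic capacity change is just an UpdateTable call. A minimal boto3 sketch (the table name and capacity numbers below are placeholders, not values from your pipeline):

    import boto3

    # Sketch: raise the table's provisioned write capacity programmatically.
    # "my-target-table" and the capacity numbers are assumptions for illustration.
    dynamodb = boto3.client("dynamodb")
    dynamodb.update_table(
        TableName="my-target-table",
        ProvisionedThroughput={
            "ReadCapacityUnits": 100,
            "WriteCapacityUnits": 20000,
        },
    )

And to check who issued UpdateTable calls, you can query the CloudTrail event history (roughly the last 90 days, per region), along these lines:

    import boto3

    # Sketch: list recent UpdateTable events from CloudTrail event history.
    cloudtrail = boto3.client("cloudtrail")
    resp = cloudtrail.lookup_events(
        LookupAttributes=[
            {"AttributeKey": "EventName", "AttributeValue": "UpdateTable"}
        ],
        MaxResults=50,
    )
    for event in resp["Events"]:
        print(event["EventTime"], event.get("Username"), event["EventName"])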