Reputation: 157
I set up a DynamoDB table in AWS, and I would like to know the throughput impact of choosing strongly consistent reads.
Do strongly consistent reads use more throughput than eventually consistent reads, or is it the other way around?
Upvotes: 3
Views: 227
Reputation: 200446
This is well documented here:
Capacity units required for reads = number of item reads per second × item size in 4 KB blocks (rounded up to the nearest 4 KB)
(If you use eventually consistent reads, you'll get twice as many reads per second.)
So strongly consistent reads use twice as much provisioned throughput as eventually consistent reads.
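To make the arithmetic concrete, here is a minimal sketch of that formula. The function name and the example workload (80 reads/sec of 3 KB items) are my own for illustration; the 4 KB rounding and the halved cost for eventually consistent reads come from the documented formula above.

```python
import math

def read_capacity_units(item_size_bytes, reads_per_second, strongly_consistent=True):
    """Estimate provisioned read capacity units (RCUs).

    One RCU = one strongly consistent read per second of an item
    up to 4 KB; an eventually consistent read costs half as much.
    """
    blocks = math.ceil(item_size_bytes / 4096)  # round item size up to 4 KB blocks
    rcus = reads_per_second * blocks
    if not strongly_consistent:
        rcus = rcus / 2  # eventually consistent reads use half the throughput
    return rcus

# 80 strongly consistent reads/sec of 3 KB items -> 80 RCUs
print(read_capacity_units(3 * 1024, 80, strongly_consistent=True))
# The same workload with eventually consistent reads -> 40 RCUs
print(read_capacity_units(3 * 1024, 80, strongly_consistent=False))
```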
Upvotes: 0
Reputation: 2677
Eventually Consistent Reads
When you read data (GetItem, BatchGetItem, Query or Scan operations), the response might not reflect the results of a recently completed write operation (PutItem, UpdateItem or DeleteItem). The response might include some stale data. Consistency across all copies of the data is usually reached within a second, so if you repeat your read request after a short time, the response returns the latest data. By default, the Query and GetItem operations perform eventually consistent reads, but you can optionally request strongly consistent reads. BatchGetItem operations are eventually consistent by default, but you can request strongly consistent reads on a per-table basis. Scan operations are eventually consistent by default.
Strongly Consistent Reads
When you issue a strongly consistent read request, DynamoDB returns a response with the most up-to-date data that reflects updates by all prior related write operations to which DynamoDB returned a successful response. A strongly consistent read might be less available in the case of a network delay or outage.
For the GetItem, Query or Scan operations, you can request a strongly consistent read result by specifying optional parameters in your request.
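In practice, that optional parameter is `ConsistentRead`. As a sketch, these are the request parameters you would pass to a low-level DynamoDB client call such as boto3's `client.get_item(**params)`; the table name and key values here are assumptions for illustration only.

```python
# Request parameters for a strongly consistent GetItem call.
# "Music", "Artist" and "SongTitle" are hypothetical names;
# ConsistentRead is the real parameter that switches the read mode.
params = {
    "TableName": "Music",
    "Key": {
        "Artist": {"S": "No One You Know"},
        "SongTitle": {"S": "Call Me Today"},
    },
    # Defaults to False (eventually consistent) when omitted.
    "ConsistentRead": True,
}

print(params["ConsistentRead"])
```

Setting `ConsistentRead` to `True` is what doubles the read capacity consumed, per the formula in the other answer.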
You can find it here
Upvotes: 1