micah

Reputation: 8096

DDB Throttling With Provisioned Capacity

I have a DDB table that I am trying to delete half of the keys from. I already have all of the keys I want to delete in hand, and I am now batch-deleting them using 8 processes with 32 threads each, for 256 total concurrency.
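For reference, each worker is deleting keys roughly along these lines (a minimal boto3 sketch, not my exact code; the table name, key shape, and backoff numbers are placeholders):

    import time

    import boto3

    dynamodb = boto3.client("dynamodb")

    def batch_delete(table_name, keys):
        """Delete keys in chunks of 25 (the per-call BatchWriteItem limit),
        retrying anything DynamoDB returns in UnprocessedItems."""
        for i in range(0, len(keys), 25):
            pending = {table_name: [{"DeleteRequest": {"Key": k}}
                                    for k in keys[i:i + 25]]}
            backoff = 0.05
            while pending:
                response = dynamodb.batch_write_item(RequestItems=pending)
                # Throttled deletes come back here instead of raising an error
                pending = response.get("UnprocessedItems", {})
                if pending:
                    time.sleep(backoff)  # back off before retrying
                    backoff = min(backoff * 2, 5.0)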

I have been hitting throttles, so I set the provisioned capacity well above the limit I was hitting to see if that helps. It does not; I am still getting throttled heavily.

I do not have hot keys, and Contributor Insights shows that I am being throttled on thousands of unique keys, with at most 3-4 throttle events for the top throttled key.

I do have a GSI, where each table primary key maps to 2-4 GSI primary keys, but no throttling is showing up on the GSI.


Any idea why I'd still be getting throttled?

Upvotes: 0

Views: 371

Answers (2)

Niketh Sudhakaran

Reputation: 514

We are facing a similar issue. We have around 1,000 delete requests split into transactions of 25 items each.

After raising an AWS support ticket, this is the response we received. It may help in some scenarios, but it did not help in our case. To mitigate this issue, you should consider:

  1. Spreading your delete requests across multiple partitions in a single transaction, so that your access pattern is evenly distributed [2] across all the partitions.
  2. Changing your table design (if it is feasible) by:
     a. Using composite attributes: combine more than one attribute to form a unique key, if that fits your access pattern. For example, consider an orders table with customerid+productid+countrycode as the partition key and order_date as the sort key.
     b. Using high-cardinality attributes: attributes with distinct values for each item, such as e-mailid, employee_no, customerid, sessionid, orderid, and so on [3].
     c. Adding random numbers or digits from a predetermined range to the key for write-heavy use cases [3] (see the sketch after the references).

References:
[1] Best practices for designing and using partition keys effectively: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-partition-key-design.html
[2] Designing partition keys to distribute your workload evenly: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-partition-key-uniform-load.html
[3] Choosing the Right DynamoDB Partition Key: https://aws.amazon.com/blogs/database/choosing-the-right-dynamodb-partition-key/
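To make suggestions 2a and 2c concrete, here is a minimal sketch of a composite, write-sharded partition key (the attribute names, the pk attribute, and the "#" delimiter are illustrative assumptions, not anything AWS prescribes):

    import random

    def composite_key(customer_id, product_id, country_code):
        # 2a: combine several attributes into one high-cardinality partition key
        return f"{customer_id}#{product_id}#{country_code}"

    def sharded_key(base_key, shards=10):
        # 2c: append a random suffix from a predetermined range so writes
        # spread over several partitions; reads must fan out over all suffixes
        return f"{base_key}#{random.randrange(shards)}"

    item = {
        "pk": {"S": sharded_key(composite_key("cust-42", "prod-7", "US"))},
        "order_date": {"S": "2023-05-01"},
    }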

Upvotes: 0

Ross Williams

Reputation: 632

Each DynamoDB partition supports a maximum of 1,000 WCUs. If your delete requests do not hit each partition evenly, you will get throttled before reaching your maximum provisioned WCUs.

You may not have hot keys, but you can still have hot partitions. Data might not be evenly balanced across partitions either, so even a completely uniform sample of keys on your side can result in unbalanced access to DynamoDB partitions.
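A back-of-the-envelope model with made-up numbers shows why raising provisioned capacity alone may not help (DynamoDB does not expose partition counts, so this is an estimate, not an API call):

    # Provisioning forces at least provisioned_wcu / 1000 partitions,
    # since each partition serves at most 1,000 WCUs.
    provisioned_wcu = 40_000
    partition_wcu_cap = 1_000
    min_partitions = provisioned_wcu // partition_wcu_cap  # at least 40

    # Drive writes at the full provisioned rate; if just 5% of them land
    # on one partition, that partition is over its cap while the rest idle.
    hot_share = 0.05
    hot_partition_demand = provisioned_wcu * hot_share     # 2,000 WCU
    print(hot_partition_demand > partition_wcu_cap)        # True -> throttled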

Upvotes: 1
