Siddharth

Reputation: 111

DynamoDB use case handling

I am using DynamoDB for a project. I have a use case where I maintain a timeline for objects, i.e. each item stores its start time, its end time, and the start time of the next object. A new object can be added between two existing objects (o1 and o2), in which case I have to set o1's next-start-time to the new object's start time, and the new object's next-start-time to o2's start time. This can cause a problem when two new objects are added between the same pair of objects at the same time, which would probably require transactions. Can someone suggest how this can be handled?

Update: My data model looks like this: objectId(Hash Key), startTime(Sort Key), endTime, nextStartTime
1, 1, 5, 4
1, 4, 6, 8
1, 8, 10, 9

So it's possible a new entry comes in whose start time is 5. In a transaction I would have to update nextStartTime of the second entry to 5 and insert a new entry after it whose nextStartTime is the start time of the third entry. While this is happening, another entry might come in whose start time also falls between the second and third entries (say 7, for example). I want the two transactions to be isolated from each other. In a traditional SQL database this would work because the second entry would be locked for the duration of the transaction, but DynamoDB doesn't lock items. So I am wondering: if I use transactions, would they protect the data integrity?
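For illustration, the insert-between operation described above can be sketched as a single TransactWriteItems request (boto3-style request shapes; the table name "Timeline" and the helper function are hypothetical, only the attribute names come from the data model above). The condition on the predecessor's nextStartTime is what detects a concurrent insert into the same gap:

```python
# Sketch only: builds the request payload, does not call AWS.
# Insert a new entry with startTime=5 between the entries with
# startTime=4 and startTime=8 from the example table.

def build_insert_between(object_id, prev_start, prev_next, new_start, new_end):
    """Build a TransactWriteItems payload that links a new entry in."""
    return [
        {   # Re-point the predecessor at the new entry, but only if its
            # nextStartTime is still what we read (detects concurrent inserts).
            "Update": {
                "TableName": "Timeline",
                "Key": {"objectId": {"N": str(object_id)},
                        "startTime": {"N": str(prev_start)}},
                "UpdateExpression": "SET nextStartTime = :new",
                "ConditionExpression": "nextStartTime = :old",
                "ExpressionAttributeValues": {
                    ":new": {"N": str(new_start)},
                    ":old": {"N": str(prev_next)},
                },
            }
        },
        {   # Insert the new entry, pointing at the old successor.
            "Put": {
                "TableName": "Timeline",
                "Item": {"objectId": {"N": str(object_id)},
                         "startTime": {"N": str(new_start)},
                         "endTime": {"N": str(new_end)},
                         "nextStartTime": {"N": str(prev_next)}},
                "ConditionExpression": "attribute_not_exists(startTime)",
            }
        },
    ]

items = build_insert_between(1, prev_start=4, prev_next=8, new_start=5, new_end=7)
# dynamodb_client.transact_write_items(TransactItems=items)
```

If a second writer races this one with start time 7, its condition `nextStartTime = :old` (with :old = 8) no longer holds after the first transaction commits, so the second transaction is cancelled rather than silently corrupting the chain.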

Upvotes: 0

Views: 364

Answers (2)

Borislav Stoilov

Reputation: 3687

DynamoDB supports optimistic locking. This is achieved via conditional writes.

You can do it manually by introducing a version attribute, or you can use the support provided (hopefully) by your SDK; see the AWS docs on optimistic locking.

TLDR

  • two writers try to update the same timeline at the same time
  • one will succeed; the other will fail with a specific error
  • you will have to retry the failing one
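As a sketch of the manual version-attribute approach (boto3-style request shape; the table name and attribute names are illustrative, not from any SDK):

```python
# Sketch only: builds a conditional UpdateItem request, does not call AWS.

def build_versioned_update(key, next_start, expected_version):
    """Update an item only if its version attribute is unchanged."""
    return {
        "TableName": "Timeline",
        "Key": key,
        "UpdateExpression": "SET nextStartTime = :next, version = :v_new",
        # Fails with ConditionalCheckFailedException if another writer
        # bumped the version since we read the item.
        "ConditionExpression": "version = :v_old",
        "ExpressionAttributeValues": {
            ":next": {"N": str(next_start)},
            ":v_new": {"N": str(expected_version + 1)},
            ":v_old": {"N": str(expected_version)},
        },
    }

req = build_versioned_update(
    {"objectId": {"N": "1"}, "startTime": {"N": "4"}},
    next_start=5,
    expected_version=3,
)
# dynamodb_client.update_item(**req)
# -> on ConditionalCheckFailedException: re-read the item and retry
```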

Dynamo also has transactions. However, they are limited to 25 items and consume twice the capacity units. If you can get away with an optimistic lock, go for it.

Hope this was helpful

Update with more info on transactions. From this doc:

Error Handling for Writing

Write transactions don't succeed under the following circumstances:

  • When a condition in one of the condition expressions is not met.
  • When a transaction validation error occurs because more than one action in the same TransactWriteItems operation targets the same item.
  • When a TransactWriteItems request conflicts with an ongoing TransactWriteItems operation on one or more items in the TransactWriteItems request. In this case, the request fails with a TransactionCanceledException.
  • When there is insufficient provisioned capacity for the transaction to be completed.
  • When an item size becomes too large (larger than 400 KB), or a local secondary index (LSI) becomes too large, or a similar validation error occurs because of changes made by the transaction.
  • When there is a user error, such as an invalid data format.

They claim that if there are two ongoing transactions on the same item, one will fail.
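So the losing transaction has to be retried by the caller. A minimal retry loop might look like this; the client below is a stand-in that fails once, purely to show the control flow (in real code you would catch botocore's TransactionCanceledException from a real boto3 client):

```python
import time

class TransactionCanceledException(Exception):
    """Stand-in for botocore's TransactionCanceledException."""

class FakeClient:
    """Simulates a client whose first transaction hits a conflict."""
    def __init__(self):
        self.calls = 0

    def transact_write_items(self, TransactItems):
        self.calls += 1
        if self.calls == 1:
            raise TransactionCanceledException("conflict with ongoing transaction")

def write_with_retry(client, items, max_attempts=3, backoff=0.01):
    """Retry a cancelled transaction with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            client.transact_write_items(TransactItems=items)
            return attempt + 1  # number of attempts used
        except TransactionCanceledException:
            if attempt == max_attempts - 1:
                raise
            time.sleep(backoff * (2 ** attempt))

attempts = write_with_retry(FakeClient(), items=[])
# attempts == 2: the first call was cancelled, the retry succeeded
```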

Upvotes: 2

hunterhacker

Reputation: 7132

Why store the nextStartTime in the item? The nextStartTime is simply the start time of the next item, right? It seems much easier to pull the item along with the next item to get the full picture at read time. With a Query you can do this in one call, and as long as the items are under 2 KB each it won't even consume more RCUs than a GetItem would.
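A sketch of that read, using boto3's Query request shape (table name "Timeline" and the helper are illustrative): because the sort key orders entries by startTime, asking for startTime >= the current item's value with Limit=2 returns the item itself followed by its successor.

```python
# Sketch only: builds the Query payload, does not call AWS.

def build_item_plus_next_query(object_id, start_time):
    """Query an item and its immediate successor in one call."""
    return {
        "TableName": "Timeline",
        "KeyConditionExpression": "objectId = :id AND startTime >= :t",
        "ExpressionAttributeValues": {
            ":id": {"N": str(object_id)},
            ":t": {"N": str(start_time)},
        },
        "ScanIndexForward": True,  # ascending by sort key
        "Limit": 2,                # the item itself + the next one
    }

req = build_item_plus_next_query(1, 4)
# resp = dynamodb_client.query(**req)
# resp["Items"][0] would be the entry with startTime=4,
# resp["Items"][1] its successor (startTime=8 in the example data)
```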

Simpler design, no cost for transactional writes, no need to do extensive testing on thread safety.

Upvotes: 2
