Deepankar Singh

Reputation: 674

DynamoDB concurrent write

I have an existing DynamoDB table which has attributes say

---------------------------------------------------------
hk(hash-key)| rk(range-key)|   a1    |    a2   |    a3   | 
---------------------------------------------------------


I have an existing DynamoDB client which updates existing records for a1 only. I want to create a second writer (DDB client) which will also update existing records, but for a2 and a3 only.
If both DDB clients try to update the same record (one for a1, the other for a2 and a3) at the exact same time, will DynamoDB guarantee that a1, a2 and a3 all end up with the correct values (all three new values)? Is using the save behavior UPDATE_SKIP_NULL_ATTRIBUTES sufficient for this purpose, or do I need to implement some kind of optimistic locking? If not, is there something that DDB provides out of the box for this purpose?
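For concreteness, the two-writer scenario can be sketched as two UpdateItem requests whose update expressions name disjoint attribute sets. The table name, key values, and helper functions below are made up for illustration; this is a sketch of the request shapes, not your actual clients:

```javascript
// Sketch of the two writers as UpdateItem parameter objects
// (AWS SDK DocumentClient style). 'MyTable' is a hypothetical table name.

// Writer 1: names a1 only, so the request never sends a2 or a3.
function buildWriter1Params(hk, rk, a1) {
  return {
    TableName: 'MyTable',
    Key: { hk, rk },
    UpdateExpression: 'SET a1 = :a1',
    ExpressionAttributeValues: { ':a1': a1 },
  };
}

// Writer 2: names a2 and a3 only, so the request never sends a1.
function buildWriter2Params(hk, rk, a2, a3) {
  return {
    TableName: 'MyTable',
    Key: { hk, rk },
    UpdateExpression: 'SET a2 = :a2, a3 = :a3',
    ExpressionAttributeValues: { ':a2': a2, ':a3': a3 },
  };
}

// With a real client the calls would look like:
//   await docClient.update(buildWriter1Params('h', 'r', 'new-a1')).promise();
//   await docClient.update(buildWriter2Params('h', 'r', 'new-a2', 'new-a3')).promise();
```

The point of the sketch: because each UpdateExpression lists only the attributes that writer owns, neither request carries (and so cannot clobber) the other writer's attributes.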

Upvotes: 10

Views: 16956

Answers (3)

PersianIronwood

Reputation: 790

Consider using the distributed locking library dynamodb-lock-client (https://www.npmjs.com/package/dynamodb-lock-client). Here is the sample code we use in our codebase:

const DynamoDBLockClient = require('dynamodb-lock-client');

const PARTITION_KEY = 'id';
const HEARTBEAT_PERIOD_MS = 3e3;
const LEASE_DURATION_MS = 1e4;
const RETRY_COUNT = 1e2;

function dynamoLock(dynamodb, lockKey, callback) {

  const failOpenClient = new DynamoDBLockClient.FailOpen({
    dynamodb,
    lockTable: process.env.LOCK_STORE_TABLE, // replace this with your own lock store table
    partitionKey: PARTITION_KEY,
    heartbeatPeriodMs: HEARTBEAT_PERIOD_MS,
    leaseDurationMs: LEASE_DURATION_MS,
    retryCount: RETRY_COUNT,
  });

  return new Promise((resolve, reject) => {
    let error;

    // Locking required as several lambda instances may attempt to update the table at the same time and
    // we do not want to get lost updates.
    failOpenClient.acquireLock(lockKey, async (lockError, lock) => {
      if (lockError) {
        return reject(lockError);
      }

      let result = null;
      try {
        result = await callback(lock);
      } catch (callbackError) {
        error = callbackError;
      }

      return lock.release((releaseError) => {
        if (releaseError || error) {
          return reject(releaseError || error);
        }
        return resolve(result);
      });
    });
  });
}

async function doStuff(id) {
  await dynamoLock(dynamodb, `Lock-DataReset-${id}`, async () => {
    // do your ddb stuff here
  });
}

Upvotes: 0

F_SO_K

Reputation: 14889

If you happen to be using the DynamoDB Java SDK you are in luck, because the SDK supports just that with optimistic locking. I'm not sure whether the other SDKs support anything similar - I suspect they do not.

Optimistic locking is a strategy to ensure that the client-side item that you are updating (or deleting) is the same as the item in DynamoDB. If you use this strategy, then your database writes are protected from being overwritten by the writes of others — and vice-versa.
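In the Java SDK this is done with the @DynamoDBVersionAttribute annotation, but the underlying pattern can be sketched by hand in any SDK as a conditional update on a version attribute. Below is a minimal sketch in JavaScript; the `version` attribute, table name, and helper function are assumptions for illustration, not part of the question's schema:

```javascript
// Optimistic-locking sketch: each item carries a numeric 'version' attribute.
// The update only succeeds if the version stored in DynamoDB still matches
// the version we read; otherwise the request fails with a
// ConditionalCheckFailedException and the caller should re-read and retry.
function buildVersionedUpdateParams(key, newValues, expectedVersion) {
  const names = Object.keys(newValues);
  return {
    TableName: 'MyTable', // hypothetical table name
    Key: key,
    UpdateExpression:
      'SET ' +
      names.map((n) => `#${n} = :${n}`).join(', ') +
      ', #version = :nextVersion',
    // The conditional check is what makes the write optimistic-locked.
    ConditionExpression: '#version = :expectedVersion',
    ExpressionAttributeNames: Object.fromEntries(
      names.concat('version').map((n) => [`#${n}`, n])
    ),
    ExpressionAttributeValues: {
      ...Object.fromEntries(names.map((n) => [`:${n}`, newValues[n]])),
      ':expectedVersion': expectedVersion,
      ':nextVersion': expectedVersion + 1,
    },
  };
}
```

If two writers race on the same item, one of the conditional updates fails cleanly instead of silently overwriting the other's changes.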

Upvotes: 3

Yogesh_D

Reputation: 18809

Reads from DynamoDB are eventually consistent by default. See this: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadConsistency.html

DynamoDB supports eventually consistent and strongly consistent reads.

Eventually Consistent Reads

When you read data from a DynamoDB table, the response might not reflect the results of a recently completed write operation. The response might include some stale data. If you repeat your read request after a short time, the response should return the latest data.

Strongly Consistent Reads

When you request a strongly consistent read, DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful. A strongly consistent read might not be available if there is a network delay or outage.

Note DynamoDB uses eventually consistent reads, unless you specify otherwise. Read operations (such as GetItem, Query, and Scan) provide a ConsistentRead parameter. If you set this parameter to true, DynamoDB uses strongly consistent reads during the operation.

Basically, you have to specify that you need strongly consistent data when you read.

And that should solve your problem. With consistent reads you should see updates to all three fields.
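As a sketch, with the DocumentClient a strongly consistent read is the same GetItem request with ConsistentRead set (the table name, key names, and helper below are illustrative assumptions):

```javascript
// A GetItem defaults to an eventually consistent read; setting
// ConsistentRead: true asks DynamoDB for a strongly consistent read.
function buildConsistentGetParams(hk, rk) {
  return {
    TableName: 'MyTable', // hypothetical table name
    Key: { hk, rk },
    ConsistentRead: true,
  };
}

// With a real client:
//   const { Item } = await docClient.get(buildConsistentGetParams('h', 'r')).promise();
```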

Do note that there are pricing impacts for strongly consistent reads.

Upvotes: -2
