Dave New

Reputation: 40002

Transactional inserts across table partitions

I am storing an entity in two Azure Storage tables. The data is identical in both tables except that the RowKey and PartitionKey are different (this is for querying purposes).

Problem

When inserting into these tables, I need the operation to be transactional - the data must only commit if both inserts are successful.

CloudTable.ExecuteBatch(..) only works when all entities in the batch belong to the same partition of a single table.

Is there no other way of doing this?

Upvotes: 0

Views: 99

Answers (1)

Gaurav Mantri

Reputation: 136196

Short answer:

Unfortunately, no. Entity group transactions are supported only within a single table and a single partition, and they carry further restrictions (at most 100 entities per batch, with a total payload under 4 MB).

Long answer:

We too have faced a similar problem where we had to insert data across multiple tables, and what we did was implement a kind of eventual consistency. Instead of writing data directly into the tables, we write it to a queue and have a background worker role process it. Once the data is in the queue, we assume it will eventually be persisted. (There is also a caching engine involved, which is updated with the latest data at the same time so that the application can still read the latest data.) The background worker role keeps retrying the inserts (using InsertOrReplace semantics rather than plain Insert, so retries are idempotent), and once all the data has been written, it simply deletes the message from the queue.
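To make the flow concrete, here is a minimal sketch of that pattern with in-memory stand-ins for the queue and the two tables. Everything here is illustrative, not an Azure SDK API: the names `enqueue_insert`, `process_queue`, `byCustomer`, and `byDate` are invented for the example, and a real implementation would use an Azure Storage queue plus the table client's InsertOrReplace operation.

```python
from collections import deque

TABLES = {"byCustomer": {}, "byDate": {}}  # stand-ins for the two Azure tables
queue = deque()                            # stand-in for an Azure Storage queue

def enqueue_insert(entity):
    """Producer: enqueue the write instead of inserting into the tables directly."""
    queue.append(entity)

def keys_for(table_name, entity):
    """Same data, but a different PartitionKey/RowKey per table, as in the question."""
    if table_name == "byCustomer":
        return (entity["customer"], entity["id"])
    return (entity["date"], entity["id"])

def upsert(table_name, entity):
    """InsertOrReplace semantics: idempotent, so retrying a partial failure is harmless."""
    TABLES[table_name][keys_for(table_name, entity)] = entity

def process_queue(max_retries=3):
    """Background worker: retry both upserts; delete the message only once both succeed."""
    while queue:
        entity = queue[0]  # peek; the message stays queued until fully processed
        for _ in range(max_retries):
            try:
                upsert("byCustomer", entity)
                upsert("byDate", entity)
                queue.popleft()  # both writes succeeded: delete the message
                break
            except Exception:
                continue  # transient failure: retry; upserts are idempotent
        else:
            return  # give up for this pass; the message remains queued for later
```

The key property is that the queue message, not the table write, is the durable record of intent: if the worker crashes after writing one table, the message is still there and the next pass re-applies both upserts safely.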

Upvotes: 2
