Reputation: 49
I'm creating a dashboard using ExtJS 5 and Azure table storage. I've done single-table transactions using table batch operations (TableBatchOperation), but now I want to do transactions across multiple tables. Is there any way to achieve this?
There can be many creates, deletes, and updates for the same table, and a single logical operation can affect many tables that have different partition keys.
Upvotes: 3
Views: 2943
Reputation: 3384
You may consider applying an eventual consistency transaction pattern via message queues.
Have a look at here: https://azure.microsoft.com/en-us/documentation/articles/storage-table-design-guide/#eventually-consistent-transactions-pattern
Here are the important sections from the link in case it expires; I've also slightly updated the text so it doesn't refer to the diagram in the linked article:
Eventually consistent transactions pattern
Enable eventually consistent behaviour across partition boundaries or storage system boundaries by using Azure queues.
Context and problem
Entity Group Transactions enable atomic transactions across multiple entities that share the same partition key. For performance and scalability reasons, you might decide to store entities that have consistency requirements in separate partitions or in a separate storage system: in such a scenario, you cannot use Entity Group Transactions to maintain consistency. For example, you might have a requirement to maintain eventual consistency between:
• Entities stored in two different partitions in the same table, in different tables, or in different storage accounts.
• An entity stored in the Table service and a blob stored in the Blob service.
• An entity stored in the Table service and a file in a file system.
• An entity stored in the Table service yet indexed using the Azure Search service.
Solution
By using Azure queues, you can implement a solution that delivers eventual consistency across two or more partitions or storage systems. To illustrate this approach, assume you have a requirement to be able to archive old employee entities. Old employee entities are rarely queried and should be excluded from any activities that deal with current employees. To implement this requirement you store active employees in the Current table and old employees in the Archive table. Archiving an employee requires you to delete the entity from the Current table and add the entity to the Archive table, but you cannot use an EGT to perform these two operations. To avoid the risk that a failure causes an entity to appear in both or neither tables, the archive operation must be eventually consistent.
A client initiates the archive operation by placing a message on an Azure queue. A worker role polls the queue for new messages; when it finds one, it reads the message and leaves a hidden copy on the queue. The worker role next fetches a copy of the entity from the Current table, inserts a copy in the Archive table, and then deletes the original from the Current table. Finally, if there were no errors from the previous steps, the worker role deletes the hidden message from the queue.
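The workflow above can be sketched as follows. This is a minimal in-memory simulation of the pattern, not real Azure SDK code: the `current`, `archive`, and `queue` structures are hypothetical dict/list stand-ins for the Current table, the Archive table, and the Azure queue, and the "hidden message" behaviour is modelled by only removing the message at the very end. In a real implementation you would use the Azure Storage Queue and Table client libraries, and the queue's visibility timeout would hide the message while the worker processes it.

```python
# In-memory stand-ins (assumptions, not the Azure SDK):
# tables keyed by (PartitionKey, RowKey), queue as a simple list of messages.
current = {("Sales", "emp1"): {"name": "Alice"}}
archive = {}
queue = [("Sales", "emp1")]  # archive requests placed by a client


def process_archive_message():
    """One poll cycle of the worker role."""
    if not queue:
        return
    key = queue[0]             # read the message; it stays ("hidden") on the queue
    entity = current.get(key)  # fetch a copy of the entity from the Current table
    if entity is not None:
        archive[key] = dict(entity)  # insert a copy into the Archive table
        current.pop(key, None)       # delete the original from the Current table
    # Only after every step succeeded is the message removed from the queue.
    # If the worker had crashed before this line, the message would reappear
    # and the (idempotent) steps above would simply be replayed.
    queue.pop(0)


process_archive_message()
print(archive)  # {('Sales', 'emp1'): {'name': 'Alice'}}
print(current)  # {}
```

Note that a replayed message for an already-archived entity is harmless: the entity is missing from `current`, so the worker skips the copy/delete steps and just removes the stale message.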
In this example, the worker role inserts the employee into the Archive table; it could instead add the employee to a blob in the Blob service or a file in a file system.
An important principle for this pattern is that each step must be idempotent, because a failed worker causes the message to reappear and every step to be replayed. Creates and deletes are naturally idempotent, but updates generally are not, so they need extra attention in this pattern.
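The idempotency distinction can be made concrete with a small sketch (hypothetical in-memory table, not SDK code): upserts and deletes converge to the same state however many times a message is replayed, while an in-place increment double-counts on replay.

```python
# Hypothetical dict-backed table for illustration.
table = {}


def upsert(key, entity):
    """Idempotent: replaying leaves the same final state."""
    table[key] = dict(entity)


def delete(key):
    """Idempotent: deleting a missing entity is a no-op."""
    table.pop(key, None)


def increment_salary(key, amount):
    """NOT idempotent: each replay adds again."""
    table[key]["salary"] += amount


upsert("emp1", {"salary": 100})
upsert("emp1", {"salary": 100})   # replayed message: no harm done
print(table["emp1"]["salary"])    # 100

increment_salary("emp1", 10)
increment_salary("emp1", 10)      # replayed message: double-counted
print(table["emp1"]["salary"])    # 120
```

To make an update safe for this pattern, it should set an absolute value (an upsert) rather than apply a delta, or carry a unique operation id that lets the worker detect and skip an already-applied message.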
Upvotes: 4
Reputation: 136196
No, it is not possible out of the box. You would need to write your own logic to achieve this, which can get really complex pretty easily.
Upvotes: 4