Reputation: 2041
This question concerns concurrent access to saga data when the saga data is persisted in Azure Table Storage. It also references information from Particular's documentation: http://docs.particular.net/nservicebus/nservicebus-sagas-and-concurrency
We've noticed that, within a single saga executing handlers concurrently, modifications to saga data appear to operate in a "last one to post changes to Azure Table Storage wins" scenario. Is this the intended behavior when using NSB in conjunction with Azure Table Storage as the saga data persistence layer?
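As an illustration of the kind of access pattern we mean, here is a minimal sketch written against the NServiceBus 5-era saga API (the saga, message, and property names are made up for this question, not our actual code): two ShipmentItemReceived messages handled concurrently each read the same ItemsReceived value, increment it, and the last one to write to Table Storage silently overwrites the other.

```csharp
using NServiceBus;
using NServiceBus.Saga;

// Hypothetical messages used only to illustrate the question.
public class OrderPlaced : IMessage
{
    public string OrderId { get; set; }
}

public class ShipmentItemReceived : IMessage
{
    public string OrderId { get; set; }
}

public class OrderSagaData : ContainSagaData
{
    public virtual string OrderId { get; set; }
    public virtual int ItemsReceived { get; set; }
}

public class OrderSaga : Saga<OrderSagaData>,
    IAmStartedByMessages<OrderPlaced>,
    IHandleMessages<ShipmentItemReceived>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<OrderSagaData> mapper)
    {
        mapper.ConfigureMapping<OrderPlaced>(m => m.OrderId).ToSaga(s => s.OrderId);
        mapper.ConfigureMapping<ShipmentItemReceived>(m => m.OrderId).ToSaga(s => s.OrderId);
    }

    public void Handle(OrderPlaced message)
    {
        Data.OrderId = message.OrderId;
    }

    public void Handle(ShipmentItemReceived message)
    {
        // Read-modify-write on saga data: two of these messages processed
        // concurrently both start from the same stored value, so the counter
        // comes up short if the persister lets the last write win.
        Data.ItemsReceived++;
    }
}
```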
Also, since Azure Table Storage supports optimistic concurrency, is it possible to enable this feature for Table Storage just as it can be enabled for RavenDB when Raven is used as the persistence technology?
If this is not possible, what is the recommended approach for handling this? Currently we follow the rule that any saga handler that could ever be processing multiple messages concurrently is not allowed to modify saga data, which means our coordination of saga messages is accomplished by means external to the saga rather than through the saga data as we'd initially intended.
Upvotes: 3
Views: 549
Reputation: 2041
After working with Particular support, the symptoms described above turned out to be a defect in NServiceBus.Azure. The issue has been patched by Particular in NServiceBus.Azure 5.3.11 and 6.2+. I can personally confirm that updating to 5.3.11 resolved our issues.
For reference, a tell-tale sign of this issue is the following exception being thrown and left unhandled:
Failed to process message Microsoft.WindowsAzure.Storage.StorageException: Unexpected response code for operation : 0
The details of the exception will indicate "UpdateConditionNotSatisfied", referring to the optimistic concurrency check.
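For context, here is a rough sketch of what that check means at the table-storage level, written directly against the Microsoft.WindowsAzure.Storage client rather than NServiceBus (the table, keys, and property name are made up): a Replace operation sends the ETag captured when the row was read, and if another writer changed the row in the meantime, storage rejects the update with HTTP 412 and the "UpdateConditionNotSatisfied" error code.

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public static class ConditionalUpdateSketch
{
    public static void IncrementItemsReceived(CloudTable table, string partitionKey, string rowKey)
    {
        // Read the row; the returned entity carries the ETag of that version.
        var retrieve = TableOperation.Retrieve<DynamicTableEntity>(partitionKey, rowKey);
        var entity = (DynamicTableEntity)table.Execute(retrieve).Result;

        entity.Properties["ItemsReceived"] = new EntityProperty(
            entity.Properties["ItemsReceived"].Int32Value.GetValueOrDefault() + 1);

        try
        {
            // Replace sends an If-Match header with the entity's ETag, so the
            // update only succeeds if the row is unchanged since it was read.
            table.Execute(TableOperation.Replace(entity));
        }
        catch (StorageException ex)
        {
            var info = ex.RequestInformation;
            if (info.HttpStatusCode == 412 &&
                info.ExtendedErrorInformation.ErrorCode == "UpdateConditionNotSatisfied")
            {
                // Another writer changed the row since it was read, so the
                // optimistic concurrency check rejected this update; the caller
                // has to re-read the row and reapply its change.
            }
            throw;
        }
    }
}
```

The saga persister relies on this same conditional update, so when concurrent handlers race on the same saga entity the losing write surfaces as the exception above and should be retried rather than silently dropped.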
Thanks to Yves Goeleven and Sean Feldman from Particular for diagnosing and resolving this issue.
Upvotes: 1
Reputation: 2185
The Azure saga storage persister uses optimistic concurrency: if multiple messages arrive at the same time, the last one to update should throw an exception, be retried, and make the data correct again.
So this sounds like a bug; can you share which version you're on?
PS: last year we resolved an issue that sounds very similar to this one (https://github.com/Particular/NServiceBus.Azure/issues/124); it was fixed in NServiceBus.Azure 5.2 and upwards.
Upvotes: 0