Reputation: 26129
How can one handle concurrency with eventual consistency? Or, put differently: how can one ensure data integrity under eventual consistency?
With CQRS and event sourcing, eventual consistency means that you put your domain events into a queue and register event handlers, which are projections. Those projections update the read cache asynchronously. Now, if you validate using that read cache, you cannot be sure that the information your validation is based on is still valid. There can be unprocessed (or unprojected?) domain events in the queue when you send your command, which could change the outcome of the validation. So this is just another type of concurrency... How would you handle these rare concurrency issues? The domain events are already saved in the storage, so you cannot do anything about them; you cannot just remove them from the event storage (because it is supposed to be write-once) and tell the user in an email: sorry, we changed our mind and cancelled your request. Or can you?
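The stale-read-cache problem described above can be shown with a minimal sketch, assuming an in-memory event queue and read cache (all names here are illustrative, not from any framework): an event is already stored and queued, but a validation reading the cache before the projection runs sees outdated state.

```python
from collections import deque

event_queue = deque()   # domain events waiting to be projected
read_cache = {}         # aggregate_id -> last projected state

def raise_event(aggregate_id, payload):
    # In a real system the event is also persisted to the event storage here.
    event_queue.append((aggregate_id, payload))

def run_projection():
    # The projection drains the queue and updates the read cache.
    while event_queue:
        aggregate_id, payload = event_queue.popleft()
        read_cache[aggregate_id] = payload

raise_event("order-1", {"status": "cancelled"})

# A validation running now sees no state for order-1, even though the
# cancellation event already exists and cannot be removed:
stale = read_cache.get("order-1")   # None: event not yet projected

run_projection()
fresh = read_cache.get("order-1")   # now reflects the cancellation
```

Any command validated against `stale` may pass a check that `fresh` would have failed; that window is exactly the concurrency issue in question.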
update:
A possible solution for handling concurrency with an event storage:

by write model:
    if last-known-aggregate-version < stored-aggregate-version then
        throw concurrency error
    else
        execute command on aggregate
        raise domain-event
        store domain-event
        ++stored-aggregate-version (by aggregate-id)

by read model:
    process query
    if result contains aggregate-id then
        attach read-cached-aggregate-version

by projection:
    process domain-event
    read-cached-aggregate-version = domain-event-related-aggregate-version (by aggregate-id)
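The write-model branch above is an optimistic concurrency check, and can be sketched as follows (a minimal in-memory store; `InMemoryEventStore` and the method names are illustrative assumptions, not a real library): the client sends the aggregate version it last saw, and the append is rejected if a newer version has been stored in the meantime.

```python
class ConcurrencyError(Exception):
    pass

class InMemoryEventStore:
    """Illustrative event store: one event stream per aggregate-id.
    The stored aggregate version is the number of events in the stream."""

    def __init__(self):
        self._streams = {}  # aggregate_id -> list of stored domain events

    def version(self, aggregate_id):
        return len(self._streams.get(aggregate_id, []))

    def append(self, aggregate_id, event, last_known_version):
        stored_version = self.version(aggregate_id)
        if last_known_version < stored_version:
            # Another command was stored since the client read the aggregate:
            # reject instead of silently building on stale state.
            raise ConcurrencyError(
                f"stale version {last_known_version} < {stored_version}")
        # Store the event; the stored version increases implicitly.
        self._streams.setdefault(aggregate_id, []).append(event)

store = InMemoryEventStore()
store.append("acct-1", "AccountOpened", last_known_version=0)

# A second writer still holding version 0 loses the race and is rejected:
rejected = False
try:
    store.append("acct-1", "AccountClosed", last_known_version=0)
except ConcurrencyError:
    rejected = True
```

The rejected client can then re-read the aggregate at its current version and decide whether its command still makes sense, which is the usual retry loop for optimistic concurrency.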
Upvotes: 1
Views: 1202
Reputation: 13246
As long as state changes, you cannot assume anything will ever be 100% consistent. Technically, you can only ensure that the various bits are 100% consistent with what you know.
Your queued domain event scenario is no different from a queue of work on a user's desk that still has to be input into the system.
Any other user performing an action dependent on the system state has no way to know that another user still needs to perform some action that may interfere with their operation.
I guess a lot comes down to assuming the data is consistent, and then developing alternate flows and processes that can deal with these scenarios as they arise.
Upvotes: 1