Reputation: 879
I believe the general guidance on event handlers that populate read models / projections is to keep them simple.
What is the guidance on performing a query from an event handler, or preferably, using a lookup domain service that returns some information needed by the view?
My specific example is an event that contains a country code that I want to appear as a country name (and other country info) in the read model, i.e. highly stable data, although it's not guaranteed that it could never change at some point in the future. Some thoughts:
Option 1: We could do the lookup in the command handler and add the result to the event as it's published. This means the command handler needs to use a domain service purely to populate an event, and the value might need passing into the write model that raises said event. In my eyes this pollutes the write model, which I would like to avoid, so to me this is the least favourable option.
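A minimal sketch of what Option 1 looks like, assuming a hypothetical `PlaceOrderHandler`, `CountryLookup` service, and `OrderPlaced` event (all names are illustrative, not from any particular framework):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OrderPlaced:
    order_id: str
    country_code: str
    country_name: str  # denormalised onto the event purely for the read side

class CountryLookup:
    """Stand-in for the domain service backed by reference data."""
    _NAMES = {"GB": "United Kingdom", "FR": "France"}

    def name_for(self, code: str) -> str:
        return self._NAMES[code]

class PlaceOrderHandler:
    def __init__(self, lookup: CountryLookup, publish):
        self._lookup = lookup
        self._publish = publish

    def handle(self, order_id: str, country_code: str) -> None:
        # The command handler calls the lookup service only so the event
        # can carry read-side data -- the "pollution" the question describes.
        event = OrderPlaced(order_id, country_code,
                            self._lookup.name_for(country_code))
        self._publish(event)
```

The country name is captured at write time, so replays are deterministic, at the cost of the write side knowing about view concerns.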
Option 2: The lookup is performed by the event handler that updates the read model / view that wants the country name. Risks: it adds a db read (via the domain service) to the event handler, creating an additional potential failure point. Re-running the events to project the view model again could result in different state, i.e. that country doesn't exist anymore. The risk is low, though, and in my use case this may actually be a preferable outcome to stale data.
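For comparison, a sketch of Option 2, again with illustrative names: the event carries only the code, and the projection resolves it when building the view.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OrderPlaced:
    order_id: str
    country_code: str  # the event carries only the code

class CountryLookup:
    _NAMES = {"GB": "United Kingdom", "FR": "France"}

    def name_for(self, code: str) -> str:
        return self._NAMES[code]  # raises KeyError if the country is gone

class CountryNameProjection:
    def __init__(self, lookup: CountryLookup, views: dict):
        self._lookup = lookup
        self._views = views

    def on_order_placed(self, event: OrderPlaced) -> None:
        # Extra read at projection time: a later replay could resolve the
        # code differently if the reference data has changed since.
        self._views[event.order_id] = {
            "country_code": event.country_code,
            "country_name": self._lookup.name_for(event.country_code),
        }
```

The write model stays clean; the trade-off is that the projection now depends on the reference data being reachable and current.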
Option 3: The lookup is performed in the query handler and combined with the view when requested. Risk: it complicates the query handler and adds a performance hit at the read point rather than at the write / event stage.
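And a sketch of Option 3 under the same hypothetical names: the view stores only the code, and the query handler joins in the country name on each request.

```python
class CountryLookup:
    _NAMES = {"GB": "United Kingdom", "FR": "France"}

    def name_for(self, code: str) -> str:
        return self._NAMES[code]

class OrderQueryHandler:
    def __init__(self, lookup: CountryLookup, views: dict):
        self._lookup = lookup
        self._views = views

    def get(self, order_id: str) -> dict:
        view = self._views[order_id]
        # Join at read time: the projection stays minimal, and every
        # query pays for the lookup (and always sees the current name).
        return {**view,
                "country_name": self._lookup.name_for(view["country_code"])}
```

This is the only option that always reflects the current reference data, which is exactly why it moves the cost and the failure point to the read path.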
Any previous experiences that would lead someone to advise one of these options over the others?
Upvotes: 0
Views: 469
Reputation: 57289
Option 2 is the usual choice in the literature - we run an asynchronous process that collects values from one or more persistent stores and composes a new representation that is cached for use by your read-only use cases.
In effect, there's very little difference between "our" data, which we read from the book of record, and "their" data, of which we have a stale copy.
Risks: it adds a db read (via the domain service) to the event handler, creating an additional potential failure point.
So what? We'll just fail and retry later. Our read-only views are stale copies anyway; anybody expecting nanosecond latency or better is kidding themselves.
Put another way, we don't care about failure; what we care about is meeting our service-level targets and how quickly we burn through our error budget.
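The fail-and-retry stance can be sketched in a few lines; `project_with_retry` and the `LookupError` convention are illustrative assumptions, not a prescribed API:

```python
def project_with_retry(handle, event, max_attempts=3):
    # Failure is not an error condition to surface to anyone -- the view
    # is eventually consistent anyway, so we just try again.
    for _ in range(max_attempts):
        try:
            handle(event)
            return True
        except LookupError:
            continue  # transient lookup failure: retry
    return False  # out of budget for now; leave the event queued for later
```

In a real system the "retry later" would typically be the messaging infrastructure redelivering the event, not an in-process loop; the loop just makes the idea concrete.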
Re-running the events to project the view model again could result in different state, i.e. that country doesn't exist anymore.
That was always the case anyway - from the moment you decided to do distributed processing; stale data became inevitable. Modeling time can help, as can ensuring that the semantics remain stable (we can continue to understand the semantics of a country code even though that country no longer exists.)
Upvotes: 1