Reputation: 1969
I am pretty new to event sourcing, and we have a domain we are considering applying Event Sourcing to.
We have an app which will store domain events in an Oracle DB,
and consumers of those events which will use them to generate read models (all read models will be generated in memory). Those consumers will mostly use a poll model to fetch the events:
they will get a request and, based on that request, consume a stream of events, generate their read model, and return it to the caller.
So, for example (a rough sketch in code follows the list):
Event Generation API --> generates events for aggregates of type A and stores them in an Oracle DB.
Consumer 1 --> gets a request for a certain type A aggregate, then fetches the events and replays them to prepare its read model.
Consumer 2 --> does exactly the same thing but presents a different read model
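Here is roughly what each consumer does on a request (the event type and the EventStore interface are simplified placeholders, not our real code):

```java
import java.util.List;

// Illustrative event type for aggregates of type A.
record NameChanged(String aggregateId, String newName) {}

// Illustrative store interface; in our case it would be backed by Oracle.
interface EventStore {
    List<NameChanged> eventsFor(String aggregateId);
}

// Consumer 1's in-memory read model, rebuilt per request.
class ReadModel1 {
    private String currentName;

    // Fold one event into the view.
    void apply(NameChanged event) {
        this.currentName = event.newName();
    }

    String currentName() { return currentName; }
}

class Consumer1 {
    private final EventStore store;

    Consumer1(EventStore store) { this.store = store; }

    // Poll model: on each request, fetch the stream and replay it.
    ReadModel1 handle(String aggregateId) {
        ReadModel1 model = new ReadModel1();
        for (NameChanged event : store.eventsFor(aggregateId)) {
            model.apply(event);
        }
        return model;
    }
}
```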
Why are we using ES
- We need to provide historical representations of data with each change and the state of the aggregate at that change.
- We need to be able to get a snapshot of an aggregate at any point in time, on a per-event basis; for example, after changing a name, we need the state as of that name-changed event.
- We need to represent the diff of the state of the aggregate between points in time.
But all those requirements need to be served in a poll manner, which means the consumers will request the view at a certain point in time (it could be the latest or a previous one)
Question 1
Since both consumer 1 and consumer 2 are going to execute basically the same logic to replay the events, then where should the code for replaying the events be? Will we implement a common library? Does this mean that we will have duplicate replay code across consumers?
I am worried that when we update an event schema we need to update multiple consumers
Question 2
Is this a good case of event sourcing?
Upvotes: 2
Views: 853
Reputation: 17703
they will get a request and, based on that request, consume a stream of events, generate their read model, and return it to the caller
This is a strange type of Read model, at least to me. It does not seem very fast, and speed is one of a Read model's strengths.
In general, Read models process events in the background, as early as possible (i.e., milliseconds after they are emitted); the results are persisted in a fast database (on disk or in memory), with all the indexes applied, so when the request comes the response is fast.
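For illustration only, a minimal sketch of that background style (the EventLog interface and all names here are assumptions, not a specific product's API); an in-memory map stands in for the fast database:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative stored event carrying its position in the log.
record StoredEvent(long position, String aggregateId, String newName) {}

// Hypothetical polling API over the event log.
interface EventLog {
    List<StoredEvent> pollSince(long position);
}

// Runs continuously in the background and keeps the views current,
// so serving a request becomes a fast lookup instead of a full replay.
class BackgroundProjector implements Runnable {
    private final EventLog log;
    private final Map<String, String> nameByAggregate = new ConcurrentHashMap<>();
    private long lastPosition = 0;

    BackgroundProjector(EventLog log) { this.log = log; }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            for (StoredEvent e : log.pollSince(lastPosition)) {
                nameByAggregate.put(e.aggregateId(), e.newName()); // update the view
                lastPosition = e.position();                       // checkpoint
            }
            try {
                Thread.sleep(100); // small pause between polls
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt();
            }
        }
    }

    // Request handling: an indexed read, milliseconds behind the writer.
    String currentName(String aggregateId) {
        return nameByAggregate.get(aggregateId);
    }
}
```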
We need to provide historical representations of data with each change and the state of the aggregate at that change.
We need to be able to get a snapshot of an aggregate at any point in time, on a per-event basis; for example, after changing a name, we need the state as of that name-changed event.
The state of the Aggregate should be hidden, private - the Aggregate needs a high level of encapsulation. Maybe you need an interpretation of the Events generated up until that point: this is a Read model's responsibility. The state is used only by the Aggregate to decide if and what events it will generate on the next command.
So, I suggest that you design a Read model that does exactly that: it maintains another state for each Aggregate, in a flat (non-event-sourced) persistence.
- We need to represent the diff of the state of the aggregate between points in time
Again, this should be done by a Read model.
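A sketch of what such a Read model could look like (all names invented): it records one interpreted state per event, so point-in-time snapshots and diffs become simple lookups:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative "history" read model for one aggregate: it records the
// interpreted state after every event it processes.
class AggregateHistory {
    // The state as of one event; extend with whatever fields you interpret.
    record Version(long eventNumber, String name) {}

    private final List<Version> versions = new ArrayList<>();

    void onNameChanged(long eventNumber, String newName) {
        versions.add(new Version(eventNumber, newName)); // one snapshot per event
    }

    // "State of the aggregate at that change": the version at or just before N.
    Version stateAt(long eventNumber) {
        Version result = null;
        for (Version v : versions) {
            if (v.eventNumber() <= eventNumber) result = v;
        }
        return result;
    }

    // "Diff between points in time": compare two recorded versions.
    String diff(long from, long to) {
        Version a = stateAt(from);
        Version b = stateAt(to);
        return "name: " + (a == null ? "<none>" : a.name())
             + " -> "   + (b == null ? "<none>" : b.name());
    }
}
```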
Since both consumer 1 and consumer 2 are going to execute basically the same logic to replay the events, then where should the code for replaying the events be? Will we implement a common library? Does this mean that we will have duplicate replay code across consumers?
But then you said: Consumer 2 --> does exactly the same thing but presents a different read model. This means they don't do basically the same thing. If you are referring to the code that fetches the events from the Event store and feeds the Consumers, then yes, you can put that in a common library.
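As a sketch of what that common library might contain (illustrative names only): the fetch-and-replay loop is written once, and each consumer plugs in its own projection:

```java
import java.util.List;
import java.util.function.Supplier;

// Consumer-specific part: how one event mutates one view.
interface Projection<E> {
    void apply(E event);
}

// Library part: the only code that talks to the Event store.
interface EventFetcher<E> {
    List<E> eventsFor(String aggregateId);
}

class Replayer<E> {
    private final EventFetcher<E> fetcher;

    Replayer(EventFetcher<E> fetcher) { this.fetcher = fetcher; }

    // Generic replay loop shared by every consumer.
    <P extends Projection<E>> P replay(String aggregateId, Supplier<P> fresh) {
        P projection = fresh.get();
        for (E event : fetcher.eventsFor(aggregateId)) {
            projection.apply(event);
        }
        return projection;
    }
}
```

Consumer 1 would then call something like replayer.replay(id, ReadModel1::new) and Consumer 2 replayer.replay(id, ReadModel2::new); the shared mechanics live in one place while the interpretation stays per consumer.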
I am worried that when we update our event schema we need to update multiple consumers
This is a real problem, but one that has multiple known solutions, such as event versioning and upcasting.
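For example, with upcasting, old stored event versions are translated into the current schema in one shared place before any consumer sees them. A minimal sketch, with invented event shapes:

```java
// Illustrative old and new schemas for the same event.
record NameChangedV1(String aggregateId, String fullName) {}
record NameChangedV2(String aggregateId, String firstName, String lastName) {}

// The upcaster lives in the shared deserialization layer; consumers only
// ever see the current (v2) shape, so a schema change touches one place.
class NameChangedUpcaster {
    NameChangedV2 upcast(NameChangedV1 old) {
        String[] parts = old.fullName().split(" ", 2); // naive split, for illustration
        return new NameChangedV2(
                old.aggregateId(),
                parts[0],
                parts.length > 1 ? parts[1] : "");
    }
}
```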
Is this a good case of event sourcing?
It seems that YES, it may be a good case for event sourcing.
Upvotes: 3