Reputation: 388
I'm playing around with some concepts that are new to me: event sourcing, microservices, and this whole paradigm.
Suppose we have the following structure, which represents a basic event-driven architecture:
UI -> API -> EVENTS BROKER ->> MICROSERVICES
We make a request from the client (UI) to the server (API), then a command is executed which raises an event. The event is published into the EVENTS BROKER, and then each service subscribed to that specific event starts a process, right? But what if I also want to implement event sourcing? Maybe it looks like this:
UI -> API -> EVENTS BROKER ->> MICROSERVICES
          -> EVENTS STORE
For this example, say I have an aggregate called Products.
But what if, after I save the event in the event store, my business logic says it should not be allowed? I don't know, maybe we only accept new products on specific days of the month. But now I've already stored the event.
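To make it concrete, here's a rough sketch in TypeScript of the flow I'm describing (all names are made up by me):

type ProductCreated = { type: "ProductCreated"; productId: string; name: string };

// Hypothetical event store client.
interface EventStore {
  append(stream: string, event: ProductCreated): Promise<void>;
}

async function handleCreateProduct(store: EventStore, productId: string, name: string) {
  const event: ProductCreated = { type: "ProductCreated", productId, name };
  // I save the event first...
  await store.append(`product-${productId}`, event);
  // ...but the business rule only runs afterwards. Too late: the event is already stored.
  if (new Date().getDate() !== 1) { // say we only accept new products on the 1st
    throw new Error("Products may only be created on specific days of the month");
  }
}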
The question itself is, when should I save the event into the EVENT STORE in this case?
Upvotes: 2
Views: 558
Reputation: 19921
An important thing to keep in mind is that you need to separate event-sourcing from event-driven architecture. These are not the same.
The command from the UI is sent to a specific service; that service can use CQRS/event sourcing internally for its own record keeping (an implementation detail). It may then optionally choose to publish events to other services. The UI does not send events to the system, only commands.
You use event sourcing within a bounded context and you use different events between services (to avoid coupling). The events on the inside are not the same as the events on the outside (between services).
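A minimal sketch of that separation in TypeScript, with all type names invented for illustration: the service keeps fine-grained domain events for its own event store and translates them into a coarser integration event at the boundary.

// Domain event: internal to the Products bounded context, stored in its own event store.
type ProductRegistered = {
  type: "ProductRegistered";
  productId: string;
  sku: string;
  internalCategoryCode: number; // internal detail other services should never see
};

// Integration event: the public contract between services, deliberately coarser.
type ProductAvailable = { type: "ProductAvailable"; productId: string };

// Translation at the service boundary keeps internal details from leaking out.
function toIntegrationEvent(event: ProductRegistered): ProductAvailable {
  return { type: "ProductAvailable", productId: event.productId };
}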
Upvotes: 1
Reputation: 57257
But what if I also want to implement event sourcing? Maybe it looks like this:
UI -> API -> EVENTS BROKER ->> MICROSERVICES
          -> EVENTS STORE
Not usually? It's more likely to look like this:
UI -> API -> EVENTS STORE ->> EVENTS BROKER ->> MICROSERVICES
Which is to say, we normally persist the changes to our data model before we publish them, and that sequence doesn't change when we switch to event sourcing.
Event sourcing, after all, is "just" using an append-only sequence of events as our data model, as opposed to a document in a document store or rows in a relational database. So it's normal that the save happens in the same place (the "transaction boundaries" are data-model agnostic).
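A sketch of that ordering, assuming hypothetical store and broker interfaces: the handler appends to the event store first and publishes only what was successfully persisted.

interface EventStore {
  append(stream: string, events: object[]): Promise<void>;
}
interface Broker {
  publish(topic: string, event: object): Promise<void>;
}

async function saveThenPublish(store: EventStore, broker: Broker, stream: string, events: object[]) {
  // 1. Persist to the data model (here, an append-only event stream).
  await store.append(stream, events);
  // 2. Only then publish; if the append failed we never reach this line.
  for (const event of events) {
    await broker.publish("products", event);
  }
}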
Upvotes: 2
Reputation: 20561
In event sourcing, the basic approach is that only valid events ever get saved. It's therefore up to your API, when processing a command, to determine which events need to be emitted and saved. The API service should be the only thing that ever decides which events get written to the event store. (It's further generally a good idea to have publication of events to the event broker happen via a process that reads from the event store: no event should be published to the broker that wasn't first written to the event store by the API service.)
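A sketch of that "decide first, append after" shape, with all names hypothetical: the command handler checks the business rule and only then emits events, so nothing invalid can ever reach the store.

type ProductEvent = { type: "ProductCreated"; productId: string; name: string };

// The only place that turns commands into stored events.
function decide(command: { productId: string; name: string }, today: Date): ProductEvent[] {
  // Business rule checked BEFORE anything is written.
  if (today.getDate() !== 1) { // hypothetical rule: new products only on the 1st
    throw new Error("Products may only be created on specific days of the month");
  }
  return [{ type: "ProductCreated", productId: command.productId, name: command.name }];
}

// Whatever decide() returns gets appended; a rejected command produces no events at all.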
Now, if it's later decided that products created on Tuesday the 20th shouldn't have been created, the event can't be deleted: it's a fact that the product was created. But you can have a new event, perhaps called something like ProductCreationRetracted, with the interpretation: "oops, this product shouldn't have been created".
In general this will entail modifying anything that reads or writes events for Products (unless, e.g., by means of some kind of tagging, you can be sure that it will never see a ProductCreationRetracted event).
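For instance, the function that folds events into current state has to learn the new event; a sketch with invented names:

type ProductEvent =
  | { type: "ProductCreated"; productId: string; name: string }
  | { type: "ProductCreationRetracted"; productId: string };

type ProductState = { exists: boolean; name?: string };

// The retraction doesn't delete history; it changes how the history is interpreted.
function applyEvent(state: ProductState, event: ProductEvent): ProductState {
  switch (event.type) {
    case "ProductCreated":
      return { exists: true, name: event.name };
    case "ProductCreationRetracted":
      return { exists: false }; // the creation remains a fact; the product just isn't live
  }
}

function currentState(events: ProductEvent[]): ProductState {
  return events.reduce(applyEvent, { exists: false });
}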
It's also worth noting that in event-sourced DDD, it's very often important to ensure that only one process at a time is writing events for a given aggregate root. Depending on the particular algebra used for deriving state from events, this requirement may be able to be loosened: if the algebra defines a conflict-free replicated data type (CRDT), for instance. If you don't know what a CRDT is, it's probably reasonable to default to assuming that you can't loosen the single-writer requirement.
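A common way to enforce that single-writer guarantee is optimistic concurrency on the stream version; a sketch, assuming a store interface with conditional appends:

interface EventStore {
  read(stream: string): Promise<{ events: object[]; version: number }>;
  // Rejects if the stream has grown beyond expectedVersion since it was read.
  append(stream: string, expectedVersion: number, events: object[]): Promise<void>;
}

async function handleWithConcurrencyCheck(store: EventStore, stream: string, newEvents: object[]) {
  const { version } = await store.read(stream);
  // If a concurrent writer appended first, this throws; the caller can then
  // re-read the stream, re-run the business rules, and try again.
  await store.append(stream, version, newEvents);
}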
Upvotes: 5