Reputation: 1697
I have the following (Axon) aggregate:
// 'apply' is statically imported from Axon's AggregateLifecycle
@Aggregate
@NoArgsConstructor
public class Car {

    @AggregateIdentifier
    private String id;

    @CommandHandler
    public Car(CreateCar command) {
        apply(new CarCreated(command.getId()));
    }

    @EventSourcingHandler
    public void carCreated(CarCreated event) {
        this.id = event.getId();
    }
}
I can create a car by submitting a CreateCar command with a specific id, which causes a CarCreated event. That is great.
However, if I send another CreateCar command with the same id, the aggregate cannot validate that the given id already exists. It will simply fire a new CarCreated event, which is a lie.
What would be the best approach to make sure the CreateCar command fails if the car already exists?
Naturally I could first check the repository, but this won't prevent race conditions...
Upvotes: 3
Views: 2667
Reputation: 2890
However, if I send another CreateCar command, with the same id, the aggregate cannot validate that the given id already exists. It will simply fire a new CarCreated event. Which is a lie.
Axon actually takes care of this for you. When an aggregate publishes an event, it is not published to other components immediately. It is staged in the Unit of Work, awaiting completion of the handler execution. After handler execution, a number of "prepare commit" handlers are invoked. One of these stores the aggregate (a no-op when using event sourcing); another publishes the events (within the scope of a transaction).
Depending on whether you use Event Sourcing or not, either adding the Aggregate instance to the persistent storage will fail (duplicate key), or the publication of the creation event will fail (duplicate aggregate identifier + sequence number).
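To make the second failure mode concrete, here is a minimal sketch (not Axon's actual implementation; class and method names are hypothetical) of an event store that enforces uniqueness on aggregate identifier + sequence number. A second "creation" event for the same id targets the same sequence number and is rejected at commit time, which Axon surfaces to the command sender as a concurrency error:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: an event store whose append enforces a unique
// constraint on (aggregateId, sequenceNumber), like the duplicate-key
// check described above.
class InMemoryEventStore {
    // Key is "aggregateId:sequenceNumber"; putIfAbsent makes the append atomic.
    private final Map<String, Object> entries = new ConcurrentHashMap<>();

    void append(String aggregateId, long sequenceNumber, Object event) {
        String key = aggregateId + ":" + sequenceNumber;
        if (entries.putIfAbsent(key, event) != null) {
            // A second CarCreated for the same id lands here.
            throw new IllegalStateException(
                    "Duplicate aggregate identifier + sequence number: " + key);
        }
    }
}
```

The first CreateCar appends at sequence number 0 and succeeds; a second CreateCar for the same id also targets sequence number 0 and fails, while later events for that aggregate (sequence 1, 2, ...) go through normally.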
Upvotes: 4
Reputation: 57214
What would be the best approach to make sure the CreateCar command fails if the car already exists? Naturally I could first check the repository, but this won't prevent race conditions...
There is no magic.
If you are going to avoid racy writes, then you either need to acquire a lock on the data store, or you need a data store with compare-and-swap semantics.
With a lock, you have a guarantee that no conflicting updates will occur between your read of the data in the store and your subsequent write.
lock = lock_for_id id
lock.acquire
try:
    Option[Car] root = repository.load id
    switch root {
        case None:
            Car car = createCar ...
            repository.store car
        case Some(car):
            // deal with the fact that the car has already been created
    }
finally:
    lock.release
You'd like to have a lock for each aggregate, but creating locks has the same racy conditions that creating aggregates does. So you will likely end up with something like a coarse grained lock to restrict access to the operation.
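One way around the racy lock creation is to delegate it to a concurrent map, which can create the per-id lock atomically. A sketch (names are hypothetical, not from any particular framework):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical lock registry: computeIfAbsent creates the lock for an id
// atomically, so two concurrent callers for the same id always see the
// same Lock instance.
class LockRegistry {
    private final Map<String, Lock> locks = new ConcurrentHashMap<>();

    Lock lockForId(String id) {
        return locks.computeIfAbsent(id, k -> new ReentrantLock());
    }
}
```

This keeps the lock granularity at one lock per aggregate id, but note the map grows with the number of ids seen, which is one reason a coarse-grained lock is sometimes chosen instead.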
With compare-and-swap, you push the contention management toward the data store. Instead of sending the store a PUT, you are sending a conditional PUT.
Option[Car] root = repository.load id
switch root {
    case None:
        Car car = createCar ...
        repository.replace car None
    case Some(car):
        // deal with the fact that the car has already been created
}
We don't need the locks any more, because we are describing precisely for the store the precondition (e.g. If-None-Match: *) that needs to be satisfied.
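In Java, the conditional PUT above can be sketched with putIfAbsent, which is exactly the "replace car None" step: the write succeeds only for the caller that found no existing entry. (Class and method names are hypothetical.)

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical repository using putIfAbsent as the compare-and-swap
// "create if the car does not exist yet" operation.
class Car {
    final String id;
    Car(String id) { this.id = id; }
}

class CarStore {
    private final ConcurrentHashMap<String, Car> cars = new ConcurrentHashMap<>();

    // Returns true only for the one writer that actually created the entry;
    // every racing writer gets false and can report "car already exists".
    boolean createIfAbsent(Car car) {
        return cars.putIfAbsent(car.id, car) == null;
    }
}
```

The contention management now lives in the store: there is no window between the read and the write during which another writer can sneak in.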
The compare-and-swap semantic is commonly supported by event stores; "appending" new events onto a stream is done by crafting a query that identifies the expected position of the tail pointer, with specially encoded values to identify cases where the stream is expected to be created (for example, Event Store supports an ExpectedVersion.NoStream semantic).
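The expected-version idea can be sketched as follows (a toy illustration, not Event Store's API; the NO_STREAM sentinel mirrors the ExpectedVersion.NoStream case where the caller expects the stream not to exist yet):

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of expected-version appends: the caller states where it
// believes the tail of the stream is, and the append fails on a mismatch.
class EventStream {
    static final long NO_STREAM = -1; // "I expect the stream not to exist yet"

    private final List<Object> events = new ArrayList<>();

    synchronized void append(long expectedVersion, Object event) {
        long currentVersion = events.size() - 1; // -1 while the stream is empty
        if (currentVersion != expectedVersion) {
            throw new IllegalStateException(
                    "expected version " + expectedVersion
                            + " but stream is at " + currentVersion);
        }
        events.add(event);
    }
}
```

A CreateCar handler appends with NO_STREAM: the first command creates the stream, and any racing duplicate finds the stream already at version 0 and fails.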
Upvotes: 1