Reputation: 1802
We use PersistentView in our code for a cluster singleton to ease the read load on it. Now that PersistentView is becoming deprecated, it is suggested that we use PersistenceQuery, which is based on the Streams API. We have:
The consuming actor may be a plain Actor or a PersistentActor if it needs to store its own state (e.g. fromSequenceNr offset). The corresponding query type is EventsByPersistenceId. There are several alternatives for connecting the Source to an actor corresponding to a previous PersistentView actor:
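For concreteness, this is roughly the kind of source I have in mind (a sketch on my part, assuming the LevelDB read journal plugin; the persistence id "my-singleton" is made up):

    import akka.NotUsed
    import akka.actor.ActorSystem
    import akka.persistence.query.{EventEnvelope, PersistenceQuery}
    import akka.persistence.query.journal.leveldb.scaladsl.LeveldbReadJournal
    import akka.stream.scaladsl.Source

    object ReadSide {
      implicit val system: ActorSystem = ActorSystem("example")

      // Obtain the read journal once; LeveldbReadJournal is just one possible plugin.
      val readJournal: LeveldbReadJournal =
        PersistenceQuery(system).readJournalFor[LeveldbReadJournal](LeveldbReadJournal.Identifier)

      // A live source of all events of one persistent actor, starting from sequence number 0.
      val events: Source[EventEnvelope, NotUsed] =
        readJournal.eventsByPersistenceId("my-singleton", 0L, Long.MaxValue)
    }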
My questions are:

1. In PersistentView, the events were handled in the receive block and we had a push-based system. With PersistenceQuery, each call to EventsByPersistenceId etc. is like a pull. How would I emulate the continual receive behaviour in actors? Should I even do that? Is this really the way Streams are supposed to be used?
2. My understanding is that each call to get EventsByPersistenceId is essentially a query. Is it therefore not inefficient to run these queries in a loop?
3. I would also be interested to know why PersistentView was dropped. Was this merely an optimisation, or is it part of Akka's wider move towards streams, i.e. a paradigm shift? Am I making a mistake in trying to emulate the PersistentView behaviour with PersistenceQuery?
4. I have come across this repo, which seems to provide the old PersistentView functionality while using PersistenceQuery behind the curtains. Would it be a good idea to use it, given the considerations above?
Upvotes: 1
Views: 449
Reputation: 1802
This repository demos the usage of PersistenceQuery and offers a very lightweight re-implementation of Akka's deprecated PersistentView on top of the Streams API.
Upvotes: 0
Reputation: 4965
eventsByPersistenceId will give you an Akka Streams Source, so it's a bit unclear what you mean by "each call". You define a stream from this source and materialize it once, and new events will be emitted by it. You can, among other things, send them to the actor replacing your PersistentView with mapAsync and ask. This approach is explained at http://doc.akka.io/docs/akka/snapshot/scala/persistence-query.html#materialize-view-using-mapasync and http://doc.akka.io/docs/akka/current/scala/stream/stream-integrations.html#mapasync-ask. So from your actor's point of view it's still "push-based" and handled by receive.
Note that mapAsync takes a parallelism factor in the first parameter list. To process the events in the order they occur, you should set it to 1 (i.e. no parallelism). If you set it to a higher value, say n, the stream will take n events and send the messages to the actor in parallel, which means they will end up in the mailbox in random order.
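A minimal sketch of that wiring (assuming the LevelDB read journal; the persistence id and the view actor are placeholders of mine):

    import akka.actor.{Actor, ActorRef, ActorSystem, Props}
    import akka.pattern.ask
    import akka.persistence.query.PersistenceQuery
    import akka.persistence.query.journal.leveldb.scaladsl.LeveldbReadJournal
    import akka.stream.ActorMaterializer
    import akka.stream.scaladsl.Sink
    import akka.util.Timeout
    import scala.concurrent.duration._

    // The plain actor that replaces the old PersistentView; it must reply
    // (e.g. with an ack) to every event so that the stream pulls the next one.
    class MyViewActor extends Actor {
      def receive: Receive = {
        case event =>
          // update the in-memory view state here ...
          sender() ! "ack"
      }
    }

    object EventForwarder extends App {
      implicit val system: ActorSystem = ActorSystem("example")
      implicit val mat: ActorMaterializer = ActorMaterializer()
      implicit val timeout: Timeout = 5.seconds

      val viewActor: ActorRef = system.actorOf(Props[MyViewActor])

      val readJournal =
        PersistenceQuery(system).readJournalFor[LeveldbReadJournal](LeveldbReadJournal.Identifier)

      // Materialize the stream once; it stays open and pushes new events as they are written.
      readJournal
        .eventsByPersistenceId("my-singleton", 0L, Long.MaxValue)
        .mapAsync(parallelism = 1)(envelope => viewActor ? envelope.event) // 1 = keep event order
        .runWith(Sink.ignore)
    }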
Each materialization of such a source does result in queries on the journal (isn't that similar to what PersistentView did?). So you surely don't want to create a large number of these sources. But if you're interested in events from many actors, you'll more likely tag the events and then use eventsByTag to get a source for all events with the given tag(s).
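For example (a sketch assuming events are written with a "user" tag, e.g. via a tagging WriteEventAdapter; the offset parameter differs a bit between Akka versions):

    import akka.actor.ActorSystem
    import akka.persistence.query.{NoOffset, PersistenceQuery}
    import akka.persistence.query.journal.leveldb.scaladsl.LeveldbReadJournal

    object TaggedReadSide {
      implicit val system: ActorSystem = ActorSystem("example")

      val readJournal =
        PersistenceQuery(system).readJournalFor[LeveldbReadJournal](LeveldbReadJournal.Identifier)

      // One live source covering the events of every persistent actor that tagged
      // its events with "user", instead of one eventsByPersistenceId source per actor.
      val taggedEvents = readJournal.eventsByTag("user", NoOffset)
    }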
I use PersistenceQuery and don't see the need to emulate PersistentView, especially as it's fairly easy to send the events from the stream to an actor as mentioned in 1.
Upvotes: 2