Reputation: 115
I am building a collaborative drawing board (e.g. r/place): there is a grid of pixels that users can change at any time, and the pixel updates get propagated to all other users online. I want to use Phoenix Channels to broadcast the pixel changes.
My question is about how to correctly send the current application state when the user connects to the service.
Currently I have an ETS table holding the drawing board state. I can update this table in MyChannel.handle_in/3 before broadcasting any pixel writes.
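Roughly like this (simplified; the module, table, and event names here are just for illustration):

```elixir
defmodule MyAppWeb.BoardChannel do
  use Phoenix.Channel

  # :board is assumed to be a named, public ETS table created at application
  # start, keyed by {x, y} with the pixel colour as the value.
  def handle_in("put_pixel", %{"x" => x, "y" => y, "color" => color}, socket) do
    # Write the pixel into the shared board state...
    :ets.insert(:board, {{x, y}, color})
    # ...then broadcast the change to everyone subscribed to this topic.
    broadcast!(socket, "pixel_update", %{x: x, y: y, color: color})
    {:noreply, socket}
  end
end
```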
My fear is that in between reading the current state in MyChannel.join and the user being subscribed to the Channel by Phoenix, a different process updates the state. The user would get a stale version of the application state, and since they aren't subscribed yet, they wouldn't receive the update through the Channel either.
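Concretely, the version I'm worried about would take the snapshot inside the join callback (in the same channel module as above), before the subscription exists. A simplified sketch with made-up names:

```elixir
def join("board:lobby", _params, socket) do
  # Snapshot the board here...
  pixels = for {{x, y}, color} <- :ets.tab2list(:board), do: %{x: x, y: y, color: color}
  # ...but Phoenix only subscribes this process to the topic after join/3
  # returns, so any pixel written and broadcast in between never reaches
  # this client.
  {:ok, %{pixels: pixels}, socket}
end
```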
To solve this, I think I need a way to atomically read the current state and then subscribe to the PubSub, ensuring that nothing gets written to the ETS table or broadcast on the Channel during that window. I guess a lock? Is that Elixir-y, or is there another way?
Upvotes: 3
Views: 376
Reputation: 115
While writing this question I had a look at Chris McCord's ElixirConf 2015 training materials. I thought the same race condition existed in that example, but it turns out it doesn't! That channel holds the solution.
In that example, the channel process sends itself an :after_join message from the join callback. That message is only handled after Phoenix has subscribed the process to the topic, so the corresponding handle_info({:after_join, ...}) clause can read the application state and send it to the user without missing anything. The key is to query the application state after being subscribed, and also to always change the state before publishing.
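A sketch of that shape (simplified; :board, the topic, and the event names are my own placeholders, and the table is assumed to be a named, public ETS table created at application start):

```elixir
defmodule MyAppWeb.BoardChannel do
  use Phoenix.Channel

  def join("board:lobby", _params, socket) do
    # Don't read the board here. Queue a message to ourselves instead; it is
    # only processed after Phoenix has finished subscribing this channel
    # process to the topic.
    send(self(), :after_join)
    {:ok, socket}
  end

  def handle_info(:after_join, socket) do
    # By now we are subscribed, so any concurrent pixel write is either
    # already in the table (we see it in this snapshot) or will arrive as a
    # broadcast.
    pixels = for {{x, y}, color} <- :ets.tab2list(:board), do: %{x: x, y: y, color: color}
    push(socket, "board_state", %{pixels: pixels})
    {:noreply, socket}
  end

  def handle_in("put_pixel", %{"x" => x, "y" => y, "color" => color}, socket) do
    # Change the state *before* publishing: a client that snapshots the table
    # after subscribing can then at worst see a pixel twice, never miss it.
    :ets.insert(:board, {{x, y}, color})
    broadcast!(socket, "pixel_update", %{x: x, y: y, color: color})
    {:noreply, socket}
  end
end
```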
I say always because I went through all 24 possible orderings of the four events involved (the joining client subscribing and reading the state, and a writer updating the state and publishing the change) and confirmed that reading the state after subscribing, combined with updating the state before publishing, is guaranteed not to lose data. Here is my work in a gist.
It does leave 4 possible orderings in which a state change is seen twice, but that is a lot easier to deal with than data loss.
Upvotes: 3