Reputation: 142
I was following this blog on implementing a rate limiter using Redis.
There they use MULTI
to pack all the commands into a single atomic transaction. This ensures that concurrent clients don't interleave their writes on the Redis node.
However, the last two steps mention:
After all operations are completed, we count the number of fetched elements. If it exceeds the limit, we don’t allow the action.
We also can compare the largest fetched element to the current timestamp. If they’re too close, we also don’t allow the action.
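For intuition, the steps the blog describes can be sketched in plain Python, with an in-memory list standing in for the Redis sorted set. This is only an illustration of the logic, not the blog's actual Redis code, and all names and parameters here are made up:

```python
import time

class SlidingWindowLimiter:
    # In-memory sketch of the sorted-set algorithm: drop entries outside the
    # window, record the current request, then check the count and the gap to
    # the previous request. Purely illustrative; not the blog's code.

    def __init__(self, limit, window_seconds, min_gap_seconds):
        self.limit = limit
        self.window = window_seconds
        self.min_gap = min_gap_seconds
        self.timestamps = []  # stands in for the Redis sorted set

    def allow(self, now=None):
        now = time.time() if now is None else now
        # Remove elements older than the window (ZREMRANGEBYSCORE in Redis).
        self.timestamps = [t for t in self.timestamps if t > now - self.window]
        previous = self.timestamps[-1] if self.timestamps else None
        # Record the current request (ZADD in Redis).
        self.timestamps.append(now)
        # "Count the number of fetched elements": reject if over the limit.
        if len(self.timestamps) > self.limit:
            return False
        # "Compare the largest fetched element to the current timestamp":
        # reject if the previous request is too close to this one.
        if previous is not None and now - previous < self.min_gap:
            return False
        return True
```

In the blog's version these steps run as Redis commands packed inside MULTI/EXEC, which is where the atomicity and replication questions below come from.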
These last two are read commands. In a setup handling millions of requests, replication becomes important.
If those reads are served by a replica, the result may be stale because of the replication lag between the master and the replica.
Am I thinking in the right direction? If so, how concerning is it, and what solutions come to mind?
Solutions that come to my mind are:
Writing Lua scripts, which are executed directly on the master.
Enforcing that these specific read queries go only to the master.
Upvotes: 1
Views: 124
Reputation: 1092
The MULTI/EXEC commands are not designed for use in Redis cluster or master-slave architectures. Transactions using MULTI/EXEC are implemented by the Redis server node, which means they will only work if you can guarantee that all commands are sent to the same node in the cluster.
In cluster mode, you need to ensure that all keys involved are in the same slot. You can achieve this by using the same hashtags for all the keys you want to execute, which forces them to be located in the same slot.
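To illustrate the hash-tag rule, here is a small Python sketch of how Redis Cluster assigns keys to slots. The CRC16/XMODEM checksum and the hash-tag extraction follow the Redis Cluster specification; the key names themselves are made up. Any keys sharing the tag `{user:42}` hash to the same slot, so they can all take part in one transaction:

```python
def crc16(data):
    # CRC-16/XMODEM, the checksum Redis Cluster uses for key slots.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key):
    # Hash-tag rule: if the key contains a non-empty "{...}" section, only
    # the text between the first '{' and the following '}' is hashed.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384
```

Because both `{user:42}:requests` and `{user:42}:tokens` hash only the `user:42` part, they always land in the same slot, regardless of the rest of the key name.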
In master-slave mode, the behavior depends on how the Redis client is implemented. Some clients will send all commands in a transaction to the master node, regardless of whether they are read-only or not, while others may not. An alternative solution is to use Lua scripts: the EVAL command is treated as a write command by all Redis clients, allowing you to execute all the commands atomically on the same master node.
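A sketch of what that could look like with the redis-py client, whose `register_script` helper wraps the script in EVAL/EVALSHA. The script body and all names here are illustrative, not from the blog post:

```python
# One possible shape of such a script. Note that ZADD uses the timestamp
# itself as the member, so two requests with the same timestamp would
# collide; a real implementation would pass a unique member in ARGV.
SLIDING_WINDOW_LUA = """
local key    = KEYS[1]
local now    = tonumber(ARGV[1])
local window = tonumber(ARGV[2])
local limit  = tonumber(ARGV[3])

redis.call('ZREMRANGEBYSCORE', key, '-inf', now - window)
redis.call('ZADD', key, now, now)
redis.call('PEXPIRE', key, math.ceil(window * 1000))
if redis.call('ZCARD', key) > limit then
  return 0  -- rejected
end
return 1    -- allowed
"""

def make_limiter(client):
    # client is a redis-py Redis instance. Because the script runs via
    # EVAL/EVALSHA, which clients route to the master as a write command,
    # the reads inside it can never see stale replica data.
    script = client.register_script(SLIDING_WINDOW_LUA)

    def allow(key, now, window_seconds, limit):
        return script(keys=[key], args=[now, window_seconds, limit]) == 1

    return allow
```

This addresses both concerns from the question at once: the whole check-and-record sequence is atomic on the server, and it is guaranteed to run on the master.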
Upvotes: 0