Reputation: 33
It's my understanding that best practice for redis involves many keys with small values.
However, we have dozens of keys that we'd like to store a few MB each. When traffic is low this works out most of the time, but in high-traffic situations timeout errors start to stack up. That causes problems for all of our tiny requests to redis, which were previously reliable.
The large values optimize a key part of our site's functionality, and they're a real performance boost when things are going well.
Is there a good way to isolate these large values so that they don't interfere with the network I/O of our best practice-sized values?
Note: we don't need to dynamically discover whether a value is >100KB or in the MBs. We have a specific method that we could point at a separate redis server/instance/database/node/shard/partition (I'm not a hardware guy).
Upvotes: 3
Views: 353
Reputation: 336
The correct solution would be to have 2 separate redis clusters: one for the large values and another for the small ones. These 2 clusters could run on the same set of physical or virtual machines, aka multitenancy (you would want to do that to fully utilize the underlying cores on your machine, since the redis server is single threaded). This way you can scale the two clusters separately, and your problem of small requests timing out because they queue behind the bigger ones will be alleviated.
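For illustration, a minimal client-side sketch of that split, assuming redis-py and two instances on the same host on ports 6379 and 6380 (the ports and timeout values are assumptions, not part of the answer): the small-value pool can fail fast while the large-value pool is allowed more time, so the two workloads no longer share a timeout budget.

```python
import redis

# Assumed layout: two Redis instances on the same box, one per workload.
# Port numbers and timeouts below are illustrative only.
small_pool = redis.ConnectionPool(host="localhost", port=6379,
                                  socket_timeout=0.05)  # tiny values: fail fast
large_pool = redis.ConnectionPool(host="localhost", port=6380,
                                  socket_timeout=2.0)   # multi-MB values: allow more time

small_client = redis.Redis(connection_pool=small_pool)
large_client = redis.Redis(connection_pool=large_pool)
```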
Upvotes: 0
Reputation: 50112
Just install/configure as many instances as needed (2 in this case), each independently managing a logical subset of keys (e.g. big and small), with routing done by the application. Simple and effective - divide and conquer.
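A hedged sketch of what the application-side routing could look like, again using redis-py; the key prefix, port numbers, and helper names are made up to show the idea, not a drop-in implementation (since you already know which method writes the big values, you could just as well hard-wire that method to the second client):

```python
import redis

# One client per instance; the application decides which one a key belongs to.
small = redis.Redis(host="localhost", port=6379)
big = redis.Redis(host="localhost", port=6380)

BIG_PREFIX = "blob:"  # hypothetical naming convention for the multi-MB values

def client_for(key: str) -> redis.Redis:
    """Route big keys to the dedicated instance, everything else to the default."""
    return big if key.startswith(BIG_PREFIX) else small

def set_value(key: str, value: bytes) -> None:
    client_for(key).set(key, value)

def get_value(key: str):
    return client_for(key).get(key)
```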
Upvotes: 0