Nik D.

Reputation: 31

Redis Memory Optimization suggestions

I have a Redis master and 2 slaves, all three currently on the same Unix server. The memory used by the three instances is approximately 3.5 GB, 3 GB, and 3 GB. There are about 275000 keys in the Redis DB; about 4000 of them are hashes. One set has 100000 values. One list has 275000 entries in it; it is a list of hashes and sets. The server has 16 GB of total memory, of which 9.5 GB is currently used. Persistence is currently off; the RDB file is written once a day by a forced background save. Please provide any suggestions for optimizations. The max-ziplist configuration is currently at its defaults.

Upvotes: 1

Views: 3573

Answers (2)

Sripathi Krishnan

Reputation: 31528

Optimizing Hashes

First, let's look at the hashes. Two important questions - how many elements are in each hash, and what is the largest value in those hashes? A hash uses the memory-efficient ziplist representation if the following condition is met:

len(hash) < hash-max-ziplist-entries && length-of-largest-field(hash) < hash-max-ziplist-value

You should increase the two settings in redis.conf based on your data, but don't increase them to more than 3-4 times the defaults.
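To illustrate, a minimal sketch using redis-cli (the key name myhash is hypothetical, and the values 512/256 - 4x the usual defaults of 128/64 - are just an example; pick values based on your data):

redis-cli OBJECT ENCODING myhash   # "ziplist" means the compact encoding; "hashtable" means it was converted
redis-cli CONFIG SET hash-max-ziplist-entries 512
redis-cli CONFIG SET hash-max-ziplist-value 256
redis-cli CONFIG REWRITE   # optionally persist the runtime change back to redis.conf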

Optimizing Sets

A set with 100000 elements cannot be optimized much, unless you provide additional details on your use case. Some general strategies though -

  1. Maybe use HyperLogLog - Are you using the set to count unique elements? If the only commands you run are sadd and scard - maybe you should switch to a HyperLogLog (see the sketch after this list).
  2. Maybe use Bloom Filter - Are you using the set to check for existence of a member? If the only commands you run are sadd and sismember - maybe you should implement a Bloom filter and use it instead of the set.
  3. How big is each element? - Set members should be small. If you are storing big objects, you are probably doing something wrong.
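As a sketch of point 1, here is what the HyperLogLog switch looks like in redis-cli (the key names unique:visitors and unique:visitors:hll are hypothetical):

# Set-based exact count: memory grows with the number of distinct members
redis-cli SADD unique:visitors "user:1001"
redis-cli SCARD unique:visitors

# HyperLogLog-based approximate count: roughly 12 KB per key regardless of
# cardinality, with a standard error of about 0.81%
redis-cli PFADD unique:visitors:hll "user:1001"
redis-cli PFCOUNT unique:visitors:hll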

Optimizing Lists

  1. A single list with 275000 elements seems wrong. It is going to be slow to access elements in the middle of the list. Are you sure a list is the right data structure for your use case?
  2. Change list-compress-depth to 1 or higher, as shown below. Read about this setting in redis.conf - there are tradeoffs. But for a list of 275000 elements, you certainly want to enable compression.
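A minimal sketch of enabling list compression at runtime (a depth of 1 leaves one node at each end of the list uncompressed, so head and tail access stays fast):

redis-cli CONFIG SET list-compress-depth 1
redis-cli CONFIG REWRITE   # optionally persist the change back to redis.conf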

Tools

Use the open source redis-rdb-tools to analyze your data set (disclaimer: I am the author of this tool). It will tell you how much memory each key is taking and help you decide where to concentrate your efforts.
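For example, a hedged sketch of generating a memory report from your daily dump (the dump path is an assumption, and the exact flags may vary by version - check the tool's README):

pip install rdbtools python-lzf
rdb -c memory dump.rdb --bytes 128 -f memory.csv   # CSV report of every key larger than 128 bytes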

You can also refer to this memory optimization cheat sheet.

What else?

You have provided very few details on your use case. The best savings come from picking the right data structure for your use case. I'd encourage you to update your question with more details on what you are storing within the hash / list / set.

Upvotes: 2

Nik D.

Reputation: 31

We applied the following configuration, and it helped reduce the memory footprint by 40%:

list-max-ziplist-entries 2048
list-max-ziplist-value 10000

list-compress-depth 1

set-max-intset-entries 2048

hash-max-ziplist-entries 2048
hash-max-ziplist-value 10000
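One caveat to keep in mind: raising these thresholds does not retroactively re-encode keys that were already converted to the non-compact representation; they pick up the new limits only when they are rewritten or when the dataset is reloaded. A sketch of forcing a reload on a non-production instance:

redis-cli DEBUG RELOAD   # synchronously saves and reloads the dataset, re-encoding
                         # keys against the current thresholds (blocks the server)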

Also, we increased the RAM on the Linux server, and that helped with the Redis memory issues.

Upvotes: 1
