Supermacy

Reputation: 1479

Used memory is greater than maxmemory in Redis

I have set maxmemory to 4G on the Redis server, and the eviction policy is set to volatile-lru. Currently it is using about 4.41G of memory, and I don't understand how this is possible: since an eviction policy is set, Redis should start evicting keys as soon as memory reaches maxmemory. I am running Redis in cluster mode with 3 masters and a replication factor of 1. This is happening on only one of the replica (slave) nodes.
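For reference, this is roughly what the configuration described above would look like; the values come from the question, but whether they were applied in redis.conf or at runtime is an assumption:

# Sketch of the settings described above, assuming they live in redis.conf
# (they can equally be set at runtime with CONFIG SET on each node):
maxmemory 4gb
maxmemory-policy volatile-lru

# Verify on the affected replica:
redis-cli config get maxmemory
redis-cli config get maxmemory-policy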

The output of

redis-cli info memory

is:

# Memory
used_memory:4734647320
used_memory_human:4.41G
used_memory_rss:4837548032
used_memory_rss_human:4.51G
used_memory_peak:4928818072
used_memory_peak_human:4.59G
used_memory_peak_perc:96.06%
used_memory_overhead:2323825684
used_memory_startup:1463072
used_memory_dataset:2410821636
used_memory_dataset_perc:50.93%
allocator_allocated:4734678320
allocator_active:4773904384
allocator_resident:4844134400
total_system_memory:32891367424
total_system_memory_human:30.63G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:4294967296
maxmemory_human:4.00G
maxmemory_policy:volatile-lru
allocator_frag_ratio:1.01
allocator_frag_bytes:39226064
allocator_rss_ratio:1.01
allocator_rss_bytes:70230016
rss_overhead_ratio:1.00
rss_overhead_bytes:-6586368
mem_fragmentation_ratio:1.02
mem_fragmentation_bytes:102920560
mem_not_counted_for_evict:0
mem_replication_backlog:1048576
mem_clients_slaves:0
mem_clients_normal:1926964
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0
active_defrag_running:0
lazyfree_pending_objects:0

Upvotes: 0

Views: 5293

Answers (1)

Monzurul Shimul

Reputation: 8386

It is important to understand that the eviction process works like this:

  1. A client runs a new command, resulting in more data added.
  2. Redis checks the memory usage, and if it is greater than the maxmemory limit, it evicts keys according to the policy.
  3. A new command is executed, and so forth.

So Redis continuously crosses the boundary of the memory limit by going over it, and then evicting keys to get back under the limit. If a command results in a lot of memory being used (like a big set intersection stored into a new key), the memory limit can be surpassed by a noticeable amount for some time.

Reference: https://redis.io/topics/lru-cache
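To illustrate the last point, here is a rough sketch (key names and sizes are hypothetical) of how a single large write can push used_memory above maxmemory before eviction brings it back down:

# Hypothetical keys; in cluster mode the keys involved would need to hash
# to the same slot, e.g. via hash tags like {tag}:a.
# Assume {tag}:a and {tag}:b are large sets and the instance is already
# close to maxmemory.
redis-cli SINTERSTORE '{tag}:ab' '{tag}:a' '{tag}:b'

# The intersection is computed and stored first; Redis only evicts volatile
# keys around command execution, so used_memory can temporarily exceed
# maxmemory. Compare the two values right after the command:
redis-cli INFO memory | grep -E '^(used_memory|maxmemory):'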

Upvotes: 3
