Farhad

Reputation: 388

Redis RSS 2.7GB and increasing. Used memory is only 40MB. Why?

Redis version is 3.2. Used memory shows as around 43MB, while the RSS is about 2.7GB and increasing. I can't understand why this is so.

The number of keys is also not that high:

# Keyspace
db0:keys=4613,expires=62,avg_ttl=368943811

INFO memory

# Memory
used_memory:45837920
used_memory_human:43.71M
used_memory_rss:2903416832
used_memory_rss_human:2.70G
used_memory_peak:2831823048
used_memory_peak_human:2.64G
total_system_memory:3887792128
total_system_memory_human:3.62G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:63.34
mem_allocator:jemalloc-3.6.0
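The `mem_fragmentation_ratio` reported above is simply `used_memory_rss` divided by `used_memory`. A quick sanity check of the numbers (a minimal sketch; the INFO text is pasted in as a string here rather than fetched from a live server):

```python
# Recompute Redis's mem_fragmentation_ratio from INFO output.
# The text below is taken from the INFO snippet in the question.
info_text = """\
used_memory:45837920
used_memory_rss:2903416832
"""

# Parse the "key:value" lines into a dict of ints.
stats = {}
for line in info_text.splitlines():
    key, _, value = line.partition(":")
    stats[key] = int(value)

# Ratio of RSS as seen by the OS to the memory Redis thinks it is using.
ratio = stats["used_memory_rss"] / stats["used_memory"]
print(round(ratio, 2))  # → 63.34
```

A ratio near 1.0 is healthy; 63 means the OS has charged the process roughly 63 times more memory than Redis is actually using for data.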

free -h

             total       used       free     shared    buffers     cached
Mem:          3.6G       3.2G       429M       152K       125M        92M
-/+ buffers/cache:       3.0G       647M
Swap:           0B         0B         0B

Restarting the process is not an option on this live production system, so I need another way to bring the memory usage down.

Upvotes: 1

Views: 2741

Answers (1)

Turn

Reputation: 7020

Even though the current usage is only 43M, at some point the usage was much higher:

used_memory_peak:2831823048
used_memory_peak_human:2.64G

so it isn’t terribly surprising that your RSS footprint is so high. It’s possible that, even though Redis isn’t using the memory anymore, the allocator just hasn’t released it back to the OS yet. Redis v4 has a MEMORY PURGE command that tells the allocator to release memory it isn’t using, but unfortunately that isn’t available to you on v3.2.

It’s also possible you have a fragmentation problem. If the memory you’re still using is scattered across many of the pages that were part of the large allocation, then the process really does hold all of those pages. There is an experimental active defragmenter in v4, but again, that doesn’t help you on v3.2.

You said that restarting the server wasn’t an option, but if that is only because you can’t suffer any downtime, you could consider bringing up a slave node, replicating to it, and then promoting it to master. This would fix both the fragmentation and the unreleased-memory issues.
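The failover could look roughly like this (a sketch only: the `10.0.0.x` addresses are placeholders, and how you repoint clients depends entirely on your setup; on 3.2 the relevant commands are SLAVEOF and SLAVEOF NO ONE):

```shell
# Placeholders: 10.0.0.1 is the current master, 10.0.0.2 the fresh node.
# Point the fresh node at the master and let it sync a full copy of the data.
redis-cli -h 10.0.0.2 SLAVEOF 10.0.0.1 6379

# Wait until replication is healthy before cutting over.
redis-cli -h 10.0.0.2 INFO replication | grep master_link_status

# Promote the replica to master; then repoint clients and retire the old node.
redis-cli -h 10.0.0.2 SLAVEOF NO ONE
```

Because the new process starts from a fresh heap and loads only the live dataset, its RSS should land near the real used_memory.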

But another question is whether the large RSS footprint is a problem for you. It could be slowing Redis down a bit, but have you determined that this is a problem in your system?

Upvotes: 2
