Nishat

Reputation: 921

Memcache eviction policy

As per my understanding:

  1. In Memcache, the whole memory is divided into fixed-size pages, and each page belongs to a particular slab class. Each page is further divided into fixed-size chunks; the chunk size is decided by the slab class. Data is stored in the chunk that best fits it (to minimise internal fragmentation).

  2. Memcache uses an LRU strategy for eviction.
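The chunk-size scheme in point 1 can be sketched like this (a toy model: the 1.25 growth factor, the 96-byte minimum chunk, and the `build_slab_classes`/`pick_slab_class` names are all assumptions for illustration, though memcached's defaults are in this ballpark):

```python
# Sketch of slab-class chunk sizing (assumed growth factor and minimum
# chunk size; both are configurable in real memcached).

PAGE_SIZE = 1024 * 1024          # each page is 1 MB
GROWTH_FACTOR = 1.25             # each class's chunk is ~1.25x the previous

def build_slab_classes(min_chunk=96, max_chunk=PAGE_SIZE):
    """Return the list of fixed chunk sizes, one per slab class."""
    sizes, size = [], min_chunk
    while size < max_chunk:
        sizes.append(size)
        size = int(size * GROWTH_FACTOR)
    sizes.append(max_chunk)      # largest class: one chunk fills a page
    return sizes

def pick_slab_class(item_size, sizes):
    """Pick the smallest chunk that fits, minimising internal fragmentation."""
    for chunk in sizes:
        if item_size <= chunk:
            return chunk
    raise ValueError("item larger than a page")

sizes = build_slab_classes()
best = pick_slab_class(100, sizes)   # a 100-byte item lands in the 120-byte class
```

The gap between the item size and its chunk size (here 20 bytes) is the internal fragmentation the asker mentions.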

Somewhere I have read that the LRU strategy is applied per slab class instead of across the whole data set. So there can be a case where a page (of a different class) has free chunks, but eviction still happens in another class.

Is this the way Memcache behaves? Instead of evicting, shouldn't a vacant page's chunks be resized?

Upvotes: 3

Views: 6011

Answers (1)

arunk2

Reputation: 2416

As you have mentioned, memory is divided into fixed-size pages (1 MB, or the maximum configured item size) and each page belongs to a particular slab class. In the beginning these pages are not actually assigned to any particular slab class. As requests come in, slab classes are created per size range and pages are attached to them.

Identifying the optimal slab class is the key thing. The algorithm roughly looks like this:

  1. Check the tail page (the most recently allocated page in this slab class) to see if it has open/free chunks.
  2. If found, store this object in that page and return.
  3. Else, find an unassigned page and assign it to this slab class.
  4. If no unassigned pages are available (this doesn't mean that all the memory is completely full), run the LRU logic to free up a chunk that hasn't been used in a while (or has expired).
  5. Store this object on either the just freed chunk (from step 4) or on the empty page (from step 3).
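The steps above can be sketched as follows (a simplified model with assumed names like `Cache` and `Slab`; real memcached manages C structs and keeps a proper per-class LRU list rather than treating the oldest page as the victim):

```python
PAGE_SIZE = 1024 * 1024              # 1 MB pages, as in the answer

class Slab:
    """One slab class: fixed chunk size, pages assigned on demand."""
    def __init__(self, chunk_size):
        self.chunk_size = chunk_size
        self.chunks_per_page = PAGE_SIZE // chunk_size
        self.pages = []              # each page = list of stored items

class Cache:
    def __init__(self, total_pages, chunk_sizes):
        self.free_pages = total_pages          # pages not yet assigned to any class
        self.slabs = [Slab(s) for s in sorted(chunk_sizes)]

    def store(self, item, size):
        # pick the smallest class whose chunks fit this item
        slab = next(s for s in self.slabs if size <= s.chunk_size)
        # steps 1-2: tail page of this class has a free chunk
        if slab.pages and len(slab.pages[-1]) < slab.chunks_per_page:
            slab.pages[-1].append(item)
            return "tail page"
        # step 3: assign a fresh unassigned page to this class
        if self.free_pages > 0:
            self.free_pages -= 1
            slab.pages.append([item])
            return "new page"
        # steps 4-5: no unassigned pages left, so evict the LRU chunk
        # *within this class only* (assumes the class already owns a page)
        victim_page = slab.pages[0]
        victim_page.pop(0)
        victim_page.append(item)
        return "evicted"
```

Note that the eviction branch never looks at other slab classes' pages, which is exactly what the question is about.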

Now, to answer your question.

Evictions occur only when all the pages are allocated to some slab class. Since all the pages are then bound to particular slab classes and ready to accept objects/items in that size range, the remaining free memory cannot be used for differently sized objects, i.e. the chunks cannot be resized. This is implemented for simplicity, I guess. The approach works great for many web applications where objects cluster around the same few sizes, like query results or rendered HTML pages. In that case, Memcache will allocate all of the pages it has to the very few classes you actually use.
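The per-class LRU behaviour described here can be sketched with one `OrderedDict` per slab class (an assumed simplification with made-up names; real memcached keeps doubly linked LRU lists per class):

```python
from collections import OrderedDict

class PerClassLRU:
    """One LRU list per slab class; eviction never crosses classes."""
    def __init__(self, capacity_per_class):
        self.capacity = capacity_per_class
        self.classes = {}                       # chunk_size -> OrderedDict

    def touch(self, chunk_size, key, value):
        lru = self.classes.setdefault(chunk_size, OrderedDict())
        if key in lru:
            lru.move_to_end(key)                # mark as most recently used
        lru[key] = value
        if len(lru) > self.capacity:
            evicted, _ = lru.popitem(last=False)  # evict LRU of *this* class
            return evicted
        return None

cache = PerClassLRU(capacity_per_class=2)
cache.touch(64, "a", 1)
cache.touch(64, "b", 2)
cache.touch(1024, "x", 9)           # different class: no interference
evicted = cache.touch(64, "c", 3)   # class 64 is full -> evicts "a", not "x"
```

Even though the 1024-byte class has spare capacity, inserting into the full 64-byte class evicts from the 64-byte class's own LRU, mirroring the behaviour the question observed.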

I remember reading somewhere that Redis is more flexible in this aspect of memory resizing.

Hope it helps!

Upvotes: 2
