Reputation: 2495
How to implement an LRU cache with Erlang?
The top-starred GitHub project was fogfish/cache, but its segmented table was not a good fit for my data.
barrel-db/erlang-lru uses a list; after testing, it turned out to be slow when there was too much data.
I guess the problem is here (lists:delete/2 has to traverse the whole list, so every touch is O(N)):
move_front(List, Key) ->
[Key | lists:delete(Key, List)].
In Java, a better implementation uses a HashMap together with a linked list.
I tried to build a linked list, and then realized that a linked list is not a good idea in Erlang, as discussed in this thread.
The question is: how do you implement an LRU cache in Erlang?
Upvotes: 4
Views: 1044
Reputation: 3825
I've implemented an LRU cache using a pseudo-time approach (the full implementation is available here: https://github.com/poroh/erl_lru).
I have two data structures:
#{key() => {order(), value()}}
gb_tree(order(), key())
where order() is pseudo-time: a counter that grows with every cache operation, so the smallest order in the gb_tree always belongs to the least recently used key.
Operations (all have O(log(N)) complexity because of the gb_tree; a sketch follows the list):
Add element (Key, Value): take the next order, store Key => {Order, Value} in the map and Order => Key in the gb_tree.
Update element (Key): look up the element's old order, remove that order from the gb_tree, and re-insert the key with a fresh order.
Check overflow: when the cache exceeds its maximum size, take the smallest order from the gb_tree and evict the corresponding key.
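A minimal sketch of this approach, assuming a map plus gb_trees and a growing counter as the pseudo-time (illustrative code, not the erl_lru implementation; the module and function names are made up):

-module(lru_sketch).
-export([new/1, put/3, get/2]).

%% State: a map Key => {Order, Value}, a gb_tree Order => Key,
%% the next pseudo-time, and the maximum number of entries.
new(MaxSize) ->
    #{map => #{}, tree => gb_trees:empty(), next => 0, max => MaxSize}.

%% Add or update an element; O(log N) because of the gb_tree operations.
put(Key, Value, #{map := Map, tree := Tree, next := N} = Cache) ->
    Tree1 = case Map of
                #{Key := {OldOrder, _}} -> gb_trees:delete(OldOrder, Tree);
                _ -> Tree
            end,
    Cache1 = Cache#{map := Map#{Key => {N, Value}},
                    tree := gb_trees:insert(N, Key, Tree1),
                    next := N + 1},
    check_overflow(Cache1).

%% Read an element and refresh its pseudo-time (the LRU "move to front").
get(Key, #{map := Map, tree := Tree, next := N} = Cache) ->
    case Map of
        #{Key := {Order, Value}} ->
            Tree1 = gb_trees:insert(N, Key, gb_trees:delete(Order, Tree)),
            Cache1 = Cache#{map := Map#{Key => {N, Value}},
                            tree := Tree1,
                            next := N + 1},
            {ok, Value, Cache1};
        _ ->
            not_found
    end.

%% Evict the entry with the smallest pseudo-time when the cache is too big.
check_overflow(#{map := Map, tree := Tree, max := Max} = Cache)
  when map_size(Map) > Max ->
    {_Order, OldKey, Tree1} = gb_trees:take_smallest(Tree),
    Cache#{map := maps:remove(OldKey, Map), tree := Tree1};
check_overflow(Cache) ->
    Cache.

Because the pseudo-time only grows, gb_trees:take_smallest/1 always returns the least recently used key, which keeps eviction at O(log N) as well.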
Upvotes: 0
Reputation: 11
The first implementation of the cache was based on ETS with two indexes. One ETS table holds the TTL -> Key relation, the other ETS table holds Key -> Object. You can see that implementation at
https://github.com/fogfish/cache/commit/8cc50bffb4178ad9ad716703507c3290e1f94821
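Schematically, that two-index layout looks something like this (a simplified sketch under my own table and function names, not the code from the commit above):

-module(two_index_sketch).
-export([init/0, insert/3, lookup/1, evict_expired/0]).

%% One ordered_set keyed by {ExpireAt, Key} gives eviction order,
%% one set keyed by Key gives fast lookups.
init() ->
    ets:new(cache_ttl, [ordered_set, named_table, public]),
    ets:new(cache_obj, [set, named_table, public]),
    ok.

insert(Key, Object, TTLSeconds) ->
    ExpireAt = erlang:system_time(second) + TTLSeconds,
    %% Both indexes must be written; a real implementation would also have to
    %% delete the old TTL entry when a key is updated -- that double
    %% maintenance is the cost mentioned above.
    ets:insert(cache_ttl, {{ExpireAt, Key}}),
    ets:insert(cache_obj, {Key, Object, ExpireAt}),
    ok.

lookup(Key) ->
    case ets:lookup(cache_obj, Key) of
        [{Key, Object, _ExpireAt}] -> {ok, Object};
        [] -> not_found
    end.

%% Walk the ordered_set from the oldest expiry time and drop expired
%% entries from both tables.
evict_expired() ->
    evict_expired(ets:first(cache_ttl), erlang:system_time(second)).

evict_expired('$end_of_table', _Now) ->
    ok;
evict_expired({ExpireAt, Key} = TtlKey, Now) when ExpireAt =< Now ->
    Next = ets:next(cache_ttl, TtlKey),
    ets:delete(cache_ttl, TtlKey),
    ets:delete(cache_obj, Key),
    evict_expired(Next, Now);
evict_expired(_TtlKey, _Now) ->
    ok.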
Maintaining two indexes was not efficient, thus the segmented cache outperforms the original implementation. I would not recommend implementing per-object TTL using Erlang data structures unless you can model your data as actors and accept the overhead. There is an implementation that addresses this; it uses a process-per-object concept:
https://github.com/fogfish/pts
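Stripped down to the bare concept (this is not the pts API, just an illustration of process-per-object TTL): each cached object lives in its own process, each access refreshes the TTL, and the process simply dies when the TTL elapses.

-module(obj_proc_sketch).
-export([start/2, fetch/1]).

%% Spawn one process per cached object; the caller keeps its own
%% Key -> Pid mapping (for example in ETS).
start(Value, TTLMillis) ->
    spawn(fun() -> loop(Value, TTLMillis) end).

%% Ask the object process for its value; a dead process means the entry expired.
fetch(Pid) ->
    Ref = erlang:monitor(process, Pid),
    Pid ! {get, self(), Ref},
    receive
        {Ref, Value} ->
            erlang:demonitor(Ref, [flush]),
            {ok, Value};
        {'DOWN', Ref, process, Pid, _Reason} ->
            expired
    end.

loop(Value, TTLMillis) ->
    receive
        {get, From, Ref} ->
            From ! {Ref, Value},
            loop(Value, TTLMillis)   %% every read restarts the TTL timer
    after TTLMillis ->
        ok                           %% no access within the TTL: the object expires
    end.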
Otherwise, you need to implement a NIF.
Upvotes: 1