Ron

Reputation: 1866

In-memory cache VS. centralized cache in a distributed system

We're currently looking for the most suitable way to access critical data in a distributed system, and we're weighing in-memory caching against a centralized cache.

Some information about the data we wish to store/access:

The way we see it is as follows:

In memory cache

Pros:

Cons:

Centralized cache

For the sake of conversation, we've considered using Redis.

Pros:

Cons:

Upvotes: 22

Views: 26167

Answers (3)

scientist.rahul

Reputation: 157

It seems you should use a centralized cache sitting between your DB and app layers, where all DB reads/writes pass through the cache, with a write-through cache invalidation scheme.
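To make the write-through idea concrete, here's a minimal sketch. The `Database` and `WriteThroughCache` classes are hypothetical stand-ins (an in-process dict plays the role of the centralized cache and the primary store); a real deployment would use a Redis client and your actual DB driver.

```python
class Database:
    """Stand-in for the primary data store (hypothetical)."""
    def __init__(self):
        self._rows = {}

    def read(self, key):
        return self._rows.get(key)

    def write(self, key, value):
        self._rows[key] = value


class WriteThroughCache:
    """All reads and writes pass through this layer. Writes update the DB
    and the cache synchronously, so the cache never serves stale data."""
    def __init__(self, db):
        self._db = db
        self._cache = {}  # stand-in for a centralized cache such as Redis

    def get(self, key):
        if key in self._cache:
            return self._cache[key]       # cache hit
        value = self._db.read(key)        # cache miss: fall back to the DB
        if value is not None:
            self._cache[key] = value      # populate for the next reader
        return value

    def put(self, key, value):
        self._db.write(key, value)        # write to the source of truth first
        self._cache[key] = value          # then update the cache (write-through)
```

The key property of write-through is that a `put` is not acknowledged until both the DB and the cache are updated, which trades a little write latency for reads that are always consistent with the store.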

Upvotes: 0

Arpita Agarwal

Reputation: 51

Redis is a great option for a centralized cache. It's fast and performs well. We are using it to store TBs of data.

Upvotes: 5

Karthikeyan Gopall

Reputation: 5689

I don't see any problem with going for a centralized cache using Redis.

  1. You are going to have a cluster setup anyway, so if a master fails, a slave will take over.
  2. If the cache is flushed for some reason, you have to rebuild it; in the meantime, requests will get data from the primary source (the DB).
  3. You can enable persistence and load the data persisted on disk, getting the data back in seconds (plug and play). If you're worried about inconsistency, follow the method below.

Even if the cache is not available, the system should still work (with added latency, obviously). Meaning: the app logic should check the cache in Redis; if the key is not there, or Redis itself is unavailable, it should get the value from the DB, populate it into Redis, and then serve it to the client.

This way, even if your Redis master and slave are both down, your application will keep working, just with a delay. Your cache will also stay up to date.
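The fallback flow described above can be sketched as follows. `RedisStub` is a hypothetical stand-in for a real Redis client (in production you would use e.g. redis-py's `get`/`set` and catch its connection errors instead of the made-up `CacheDown` exception):

```python
class CacheDown(Exception):
    """Raised when the cache is unreachable (stands in for a connection error)."""


class RedisStub:
    """Hypothetical cache client that can be toggled 'down' for illustration."""
    def __init__(self):
        self.available = True
        self._data = {}

    def get(self, key):
        if not self.available:
            raise CacheDown("cache unreachable")
        return self._data.get(key)

    def set(self, key, value):
        if not self.available:
            raise CacheDown("cache unreachable")
        self._data[key] = value


def fetch(key, cache, db_read):
    """Serve from cache when possible; otherwise read from the DB
    and repopulate the cache so it stays up to date."""
    try:
        value = cache.get(key)
        if value is not None:
            return value              # cache hit
    except CacheDown:
        return db_read(key)           # cache down: serve from the DB (slower)
    value = db_read(key)              # cache miss
    try:
        cache.set(key, value)         # repopulate for future requests
    except CacheDown:
        pass                          # best effort; DB already answered
    return value
```

Note that the DB read path is exercised in two distinct cases: a plain cache miss (after which the cache is repopulated) and a cache outage (during which requests are served directly, with the delay mentioned above).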

Hope this helps.

Upvotes: 16
