Reputation: 3127
In my current application, we are dealing with information that rarely changes.
For performance, we want to store it in a cache.
The problem is invalidating these cached objects whenever they are updated.
We have not finalized the caching product.
As we are building this application on Azure, we will probably use Azure Cache for Redis.
One strategy could be to add code to the Update API that invalidates the object in the cache.
I am not sure whether this is a clean approach.
We do not want to go with Cache Expiration based on time (TTL).
Could you please suggest some other strategies used for cache invalidation?
Upvotes: 36
Views: 32544
Reputation: 18514
Invalidating the cache during the Update stage is a viable approach, and it was widely used in the past.
You have two options here when the UPDATE happens:
If you want an LRU cache, UPDATE can simply delete the old value; the first time the object is fetched again, you recreate it from the actual database after the read. However, if you know that your cache is very small and you are using the main database for concerns other than data size, you can update the cache directly during UPDATE.
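The two UPDATE options can be sketched as follows. This is a minimal illustration, not a definitive implementation: plain dicts stand in for Redis and the database so it runs anywhere, and the function names (`update_and_delete`, `update_write_through`, `read`) are mine; with redis-py you would call `r.delete(key)` and `r.set(key, value)` instead.

```python
# Dicts as stand-ins for the primary database and the Redis cache.
database = {}
cache = {}

def update_and_delete(key, value):
    """Option 1 (cache-aside): write the DB, then drop the stale cache entry."""
    database[key] = value
    cache.pop(key, None)        # next read repopulates the cache from the DB

def read(key):
    if key not in cache:        # cache miss: rebuild after reading the database
        cache[key] = database[key]
    return cache[key]

def update_write_through(key, value):
    """Option 2 (write-through): write the DB and the cache in the same step."""
    database[key] = value
    cache[key] = value
```

After `update_and_delete`, the key is absent from the cache until the next read rebuilds it; after `update_write_through`, the cache is fresh immediately.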
However, all this is not enough to be completely consistent.
When you write to your DB, the Redis cache may be unavailable for a few seconds, for example, so the data remains out of sync between the two.
What do you do in that case?
There are several options you could use at the same time.
So del-cache-on-update plus write-cache-on-read is the basic strategy, but you can employ additional systems to eventually repair the inconsistencies.
Another option, instead of the above, is to have a background process that uses Redis SCAN to verify, key by key, whether there are inconsistencies. This process can be slow and can run against a replica of your database.
As you can see, the main idea is always the same: if an update to the cache fails, don't let it become a permanent issue that remains there potentially forever; give it a chance to fix itself at a later time.
Upvotes: 44
Reputation: 1315
I think the lambda(ish) architecture works for your use case.
For real-time updates, you will have to work on the codebase of the application to write the data to both DB and cache.
For batch data load, you can look at data ingestion tools such as logstash/fluentd to "pull" the latest data from your database and push them over to the cache. This can be done based on a column that always increments (ID number or timestamp).
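The incrementing-column idea can be sketched as a small incremental loader. This is an illustration, not the Logstash pipeline itself: a list of `(id, payload)` tuples stands in for the database table and a dict for Redis; in practice the query would be something like `SELECT ... WHERE id > :last_seen ORDER BY id`, and the write a Redis `SET`.

```python
# Stand-ins: rows is the source table, cache is Redis.
rows = [(1, "a"), (2, "b"), (3, "c")]    # (id, payload) in the source database
cache = {}
last_seen = 0                            # highest ID pushed so far

def load_batch():
    """Push only the rows added since the last run, tracked via the ID column."""
    global last_seen
    new_rows = [(i, v) for i, v in rows if i > last_seen]
    for i, v in new_rows:
        cache[f"row:{i}"] = v
        last_seen = max(last_seen, i)
    return len(new_rows)
```

Each run picks up only rows whose ID exceeds the high-water mark from the previous run, which is exactly what makes an always-incrementing column (or timestamp) suitable for batch ingestion.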
I have an Oracle database at my end. The Logstash JDBC plugin does a decent job of pulling the latest records. The Logstash output can be formatted and written to a file that Redis can consume. I wrote a small bash script to orchestrate this. Tested with 3 million records, and it works fine.
Upvotes: 1