Sergey Telshevsky

Reputation: 12217

MongoDB as MySQL cache

I just had this idea and think it's a good solution for this problem, but I'm asking whether there are downsides to this method. I have a webpage that queries the database often, as much as 3-5 queries per page load. Each query makes a dozen (literally) joins, and the results of each query are then used in further queries to construct PHP objects. Needless to say, the load times are ridiculous even on the cloud, but that's how it works now.

I thought about storing the already constructed objects as JSON, or in MongoDB's BSON format. Would MongoDB be a good cache engine for this? Here is how I think it would work:

  1. When the user opens the page, if there is no data in Mongo with the proper ID, the queries to MySQL fire, each returning data that is converted into a properly constructed object. The object is sent to the views, converted to JSON, and saved in Mongo.
  2. If there was data in Mongo with the corresponding ID, it is sent to PHP and converted back into an object.
  3. When some of the data changes in MySQL (an administrator edits/deletes content), a delete function fires that removes the edited/deleted object from MongoDB as well.
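The flow above can be sketched roughly like this. Plain dicts stand in for MongoDB (the cache) and MySQL (the source of truth), and the names `build_object`, `load_page`, and `on_edit` are hypothetical, not from any real API:

```python
import json

mysql = {42: {"title": "Hello", "tags": ["a", "b"]}}  # stand-in for MySQL rows
mongo = {}                                            # stand-in for a Mongo collection

def build_object(obj_id, row):
    # In the real app this would run the follow-up queries and joins.
    return {"id": obj_id, "title": row["title"], "tags": row["tags"]}

def load_page(obj_id):
    cached = mongo.get(obj_id)
    if cached is not None:                # step 2: cache hit, skip MySQL entirely
        return json.loads(cached)
    row = mysql[obj_id]                   # step 1: cache miss, query MySQL
    obj = build_object(obj_id, row)
    mongo[obj_id] = json.dumps(obj)       # save the constructed object as JSON
    return obj

def on_edit(obj_id, new_row):
    mysql[obj_id] = new_row
    mongo.pop(obj_id, None)               # step 3: invalidate the cached copy
```

The next `load_page` call after `on_edit` misses the cache and rebuilds the object from MySQL, so readers never see stale data.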

Is this a good way to use MongoDB? What are the downsides of this method? Would Redis be better for this task? I also need NoSQL for other parts of the project, which is why I'm considering using one of these two instead of memcached.

The question *MongoDB as a cache for frequent joins and queries from MySQL* has some information, but it's totally irrelevant to my case.

Upvotes: 2

Views: 3041

Answers (2)

Sushant Gupta

Reputation: 9458

Well, you can go with Memcached or Redis for caching objects. MongoDB can also be used as a cache; I use MongoDB for caching aggregation results, since unlike Memcached it has the advantage of supporting a wide range of queries.

For example, in a tagging application, displaying the page count for each tag requires a GROUP BY query that scans the whole table. So I have a cron job that runs that GROUP BY query and caches the aggregation result in Mongo. This works perfectly well for me in production. You can do the same for countless other complex computations.
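A minimal sketch of that cron-job idea, with dicts standing in for the SQL table and the Mongo cache collection (all names here are hypothetical):

```python
from collections import Counter

pages = [                          # stand-in for the pages/tags table in MySQL
    {"id": 1, "tags": ["php", "mysql"]},
    {"id": 2, "tags": ["php"]},
    {"id": 3, "tags": ["mongodb"]},
]
cache = {}                         # stand-in for a Mongo cache collection

def recompute_tag_counts():
    # Equivalent of: SELECT tag, COUNT(*) FROM page_tags GROUP BY tag
    counts = Counter(tag for page in pages for tag in page["tags"])
    cache["tag_counts"] = dict(counts)   # the cron job writes the result

recompute_tag_counts()
# Page views now read cache["tag_counts"] instead of scanning the table.
```

The page render only ever does a cheap key lookup; the expensive scan happens once per cron interval, off the request path.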

Also, MongoDB's capped collections and TTL collections are perfect for caching.

Upvotes: 3

kokx

Reputation: 1706

I think you would be better off using memcached or Redis to cache the query results. MongoDB is more of a full database than a cache, while both memcached and Redis are optimized specifically for caching.

However, you could implement your cache as a two-level cache. Memcached, for example, does not guarantee that data stays in the cache (it may evict data when storage is full). This makes it hard to implement a system of tags (where, for example, you tag each cached result with the MySQL table it came from, and can then trigger expiration of all query results associated with that table). A common solution is to use memcached as the first level, backed by a second, slower but more reliable cache, which should still be faster than MySQL. MongoDB could be a good candidate for that (as long as you keep the queries to MongoDB simple).
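The two-level scheme with table tags can be sketched like this. Dicts stand in for memcached (lossy first level) and MongoDB (reliable second level), and the function names are hypothetical:

```python
level1 = {}   # memcached stand-in: fast, but entries may be evicted at any time
level2 = {}   # MongoDB stand-in: reliable, and records table dependencies

def cache_set(key, value, tables):
    level1[key] = value
    level2[key] = {"value": value, "tables": set(tables)}

def cache_get(key):
    if key in level1:                     # fast path: first-level hit
        return level1[key]
    entry = level2.get(key)
    if entry is not None:                 # slower, but still beats MySQL
        level1[key] = entry["value"]      # re-warm the first level
        return entry["value"]
    return None                           # full miss: caller falls back to MySQL

def invalidate_table(table):
    # Expire every cached result that depends on the given MySQL table.
    stale = [k for k, e in level2.items() if table in e["tables"]]
    for k in stale:
        level2.pop(k, None)
        level1.pop(k, None)
```

Because the second level never evicts on its own, the tag index in `level2` stays complete, so a write to one MySQL table can reliably expire every dependent query result even if memcached has already dropped some of them.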

Upvotes: 7

Related Questions