xaxa

Reputation: 1159

node.js keep a small in-memory database

I have an API service in Node.js; basically, it gets an id from the request, reads the record with this id from the database, and returns it in the response.

While there are many clients with different ids, usually only about 10–20 of them are used in a given timespan.

Is it a good idea to create an object with ids as keys, storing each resulting record along with a last_requested time, to emulate a small fast-access database? Whenever a record is requested, I would update its last_requested field with new Date(). I would also set up a setInterval() to delete the keys that have not been used for some time.

Records in the database do not change often, and when they do, I can restart the service (several instances run simultaneously via PM2, so they can be gracefully restarted).

If the required id is not found in this "database", a request to the real database is performed and the result is stored in the object under a new key.
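A minimal sketch of the approach described above. The record shape and `fetchFromDatabase` are stand-ins for the real database access, and the eviction interval is an arbitrary choice:

```javascript
const cache = {}; // id -> { record, last_requested }
const MAX_IDLE_MS = 5 * 60 * 1000; // evict entries unused for 5 minutes

async function fetchFromDatabase(id) {
  // Placeholder for the real database query.
  return { id, name: `record-${id}` };
}

async function getRecord(id) {
  const hit = cache[id];
  if (hit) {
    hit.last_requested = new Date();
    return hit.record;
  }
  const record = await fetchFromDatabase(id);
  cache[id] = { record, last_requested: new Date() };
  return record;
}

// Periodically sweep out stale entries. unref() lets the process
// exit even if the interval is still scheduled.
setInterval(() => {
  const now = Date.now();
  for (const id of Object.keys(cache)) {
    if (now - cache[id].last_requested.getTime() > MAX_IDLE_MS) {
      delete cache[id];
    }
  }
}, 60 * 1000).unref();
```

Note that with a plain object the cache can grow without bound between sweeps; a size cap (as in the LRU answer below) avoids that.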

Upvotes: 4

Views: 4618

Answers (2)

Ali

Reputation: 22317

It's not just a good idea; caching is a necessity at every level of a computing system. It starts at the CPU level (L1, L2, L3 caches), continues at the OS level, and goes up to the application level, where it must be done by the developer.

Even if you have a well-structured database with good indexes, there is still overhead for the TCP/IP communication between your app and the database. So if you are going to access some rows frequently, keeping them in your app's process is a must.

The good news is that a Node.js app is a single process resident in memory (unlike PHP or other scripting programs whose processes come and go), so you can keep frequently required data loaded and skip the database access.

The best mechanism for storing the records is an LRU (least recently used) cache. There are several LRU cache packages available for Node.js.

In an LRU cache you can define how much memory the cache may use, the expiry age of each item, and how many items it can store. Or you can write your own!
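A hand-rolled sketch of the "write your own" option, using the fact that a JavaScript Map iterates keys in insertion order. The size and age limits correspond to the item count and expiry age mentioned above; the names are illustrative:

```javascript
// Minimal LRU cache: bounded item count plus per-item expiry.
class LRUCache {
  constructor(maxSize, maxAgeMs) {
    this.maxSize = maxSize;
    this.maxAgeMs = maxAgeMs;
    this.map = new Map(); // key -> { value, storedAt }
  }

  get(key) {
    const entry = this.map.get(key);
    if (!entry) return undefined;
    if (Date.now() - entry.storedAt > this.maxAgeMs) {
      this.map.delete(key); // expired
      return undefined;
    }
    // Re-insert to mark this key as most recently used.
    this.map.delete(key);
    this.map.set(key, entry);
    return entry.value;
  }

  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, { value, storedAt: Date.now() });
    if (this.map.size > this.maxSize) {
      // The first key in iteration order is the least recently used.
      this.map.delete(this.map.keys().next().value);
    }
  }
}
```

For example, `new LRUCache(1000, 60 * 1000)` keeps at most 1000 records, each for at most a minute. A published package will handle more edge cases (memory-based sizing, stale refresh, and so on), so this is mainly to show the mechanism.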

Upvotes: 0

Madara's Ghost

Reputation: 174957

You're talking about caching. And it's very useful, if:

  • You have a lot of reads, but not a lot of writes. I.e., lots of people request a record, and it rarely changes.
  • You have a lot of free memory, or not many records.
  • You have a good indication of when to invalidate the cache.

For trivial use cases (i.e., under 50 requests/second), you probably don't need an in-memory cache in front of the database. Moreover, database access is very fast if you use the tools the database gives you (persistent connection pools, parameterized queries, the query cache, etc.).

It all depends on your specific use case. But I wouldn't do it until I actually started encountering performance problems and determined that the database is the bottleneck.

Upvotes: 2
