EthanAlef

Reputation: 75

Preloading data into RAM for fast transaction

My thinking is that if we preload clients' data (account number, net balance) into RAM in advance, then whenever a transaction is processed, the transaction record is written into a FIFO data structure in RAM and the clients' data in RAM is updated as well. After a certain period, the records are written to the database on disk to prevent data loss caused by RAM's volatility.

By doing so, the time spent on I/O should be saved, and hence less time is needed to look up clients' data, achieving the aim of faster transactions.

I have heard about in-memory databases, but I do not know if my idea is the same as that. Also, is there a better approach than what I am describing?
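To make the idea concrete, here is a minimal sketch of what I have in mind; all class and member names are just illustrative, and a plain file stands in for the real database:

```cpp
// Sketch: clients' balances preloaded into RAM, transaction records kept
// in an in-memory FIFO queue, periodically flushed to durable storage.
#include <cstdio>
#include <cstddef>
#include <deque>
#include <string>
#include <unordered_map>

struct Txn { int account; long long amount; };

class TxnCache {
public:
    // Preload a client's data (account number, net balance) into RAM.
    void preload(int account, long long balance) { balances_[account] = balance; }

    // Apply a transaction entirely in RAM: O(1) hash lookup, no disk I/O.
    void apply(const Txn& t) {
        balances_[t.account] += t.amount;
        fifo_.push_back(t);
    }

    long long balance(int account) const { return balances_.at(account); }

    // Periodically drain the FIFO to disk. Anything still queued when the
    // machine crashes is lost -- the durability gap of this scheme.
    std::size_t flush(const std::string& path) {
        std::FILE* f = std::fopen(path.c_str(), "a");
        if (!f) return 0;
        std::size_t n = 0;
        while (!fifo_.empty()) {
            const Txn& t = fifo_.front();
            std::fprintf(f, "%d %lld\n", t.account, t.amount);
            fifo_.pop_front();
            ++n;
        }
        std::fclose(f);
        return n;
    }

private:
    std::unordered_map<int, long long> balances_;  // preloaded client data
    std::deque<Txn> fifo_;                         // unflushed transaction records
};
```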

Upvotes: 1

Views: 425

Answers (1)

user3640029

Reputation: 165

In my opinion, there are several aspects to think about and research in order to get a step forward. Preloading and working on data in memory is usually faster than being bound to disk/database page-access schemes. However, you instantly lose durability. Therefore, three approaches are valid in different situations:

disk-synchronous (good old database way, after each transaction data is guaranteed to be in permanent storage)

in-memory (good as long as the system is up and running, faster by orders of magnitude, risk of losing transaction data on errors)

delayed (basically in-memory, but from time to time data is flushed to disk)

It is worth noting that the delayed approach is directly supported on Linux through memory-mapped files, which are, on the one hand, often as fast as ordinary memory (unless you read and access too many pages) and, on the other hand, synced to disk automatically (but not instantly).

As you tagged C++, this is possibly the simplest way of getting your idea running.
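A minimal sketch of the memory-mapped approach on Linux, using the POSIX mmap/msync calls (the filename and record layout are illustrative, and error handling is kept to a minimum):

```cpp
// Delayed durability via a memory-mapped file: writes are plain memory
// stores; the kernel flushes dirty pages to disk lazily, and msync()
// forces a flush when you want one.
#include <cstddef>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

struct Record { int account; long long balance; };

// Write one record through the mapping, sync it, and read it back.
// Returns the balance read back, or -1 on any failure.
long long write_and_read(const char* path) {
    const std::size_t size = 1024 * sizeof(Record);

    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0) return -1;
    if (ftruncate(fd, size) != 0) { close(fd); return -1; }  // size the backing file

    // Map the file; MAP_SHARED makes stores visible to the file.
    void* p = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { close(fd); return -1; }

    Record* records = static_cast<Record*>(p);
    records[0] = {42, 1000};  // a plain memory write, no explicit I/O call

    // Force dirty pages to disk now instead of waiting for the kernel.
    msync(p, size, MS_SYNC);

    long long result = records[0].balance;
    munmap(p, size);
    close(fd);
    unlink(path);  // clean up the demo file
    return result;
}
```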

Note, however, that when you assume failures (hardware, reboot, etc.), you won't have real transactions at all, because it is non-trivial to tell precisely when the data has actually been written.

As a side note: sometimes this problem is solved by writing (reliably) to a log file (sequential access, therefore faster than writing directly to the data files). Search for the term compaction in the context of databases: this is the operation that merges the log with the on-disk data structures normally used, and it happens from time to time (when the log grows too large).
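The log-first idea can be sketched as follows; the function name and record format are made up for illustration, and fsync() is the POSIX call that makes the append durable before the transaction is acknowledged:

```cpp
// Write-ahead-log sketch: append each transaction record sequentially and
// force it to disk. Sequential appends are much cheaper than random
// updates to the data files; a separate compaction pass would later merge
// the log into the main on-disk structures.
#include <cstdio>
#include <string>
#include <unistd.h>

// Append one record to the log and sync it. Returns true on success.
bool log_txn(const std::string& path, int account, long long amount) {
    std::FILE* f = std::fopen(path.c_str(), "a");
    if (!f) return false;
    std::fprintf(f, "%d %lld\n", account, amount);
    std::fflush(f);                         // drain stdio's buffer
    bool ok = (fsync(fileno(f)) == 0);      // force the OS to hit the disk
    std::fclose(f);
    return ok;
}
```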

To the last aspect of the question: yes, in-memory databases work in main memory. Still, depending on the guarantees they give (ACID?), some operations may still involve the hard disk or NVRAM.

Upvotes: 1
