user1715925

Reputation: 607

Best way to use shared memory that can be read directly by different processes without copy overhead

I have an application whose initialization involves reading very large data from files (> 10 GB) and then performing some computations on that data (which is of type Dictionary). The initialization step takes a couple of hours every time, even though my data is fixed and never changes. What I'd like to do is have one process pre-load the data into memory once, and have other processes on the same machine read it directly, without any copy. So far, I have found a couple of possible methods.
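For example, a named memory-mapped file seems like it could give two processes a view of the same region with no extra copy. Here is a minimal sketch of that idea (the map name, the tiny capacity, and the single Int64 payload are placeholders only; a real Dictionary can't be dropped into shared memory as-is, so the data would need a flat, offset-based layout):

    using System;
    using System.IO.MemoryMappedFiles;

    class SharedMemoryDemo
    {
        const string MapName = "MyDataMap";      // placeholder name
        const long Capacity = 16 * 1024 * 1024;  // placeholder size, not 10 GB

        static void Main(string[] args)
        {
            if (args.Length > 0 && args[0] == "host") RunHost();
            else RunReader();
        }

        // Run once in the "loader" process: creates the named region
        // and keeps it alive while readers use it.
        static void RunHost()
        {
            using (var mmf = MemoryMappedFile.CreateNew(MapName, Capacity))
            using (var accessor = mmf.CreateViewAccessor())
            {
                accessor.Write(0, 42L);  // stand-in for the real preloaded data
                Console.WriteLine("Data mapped; press Enter to release.");
                Console.ReadLine();      // the mapping disappears when the host exits
            }
        }

        // Run in any other process on the same machine: opens the region
        // by name and reads it in place, without copying the whole data set.
        static void RunReader()
        {
            using (var mmf = MemoryMappedFile.OpenExisting(MapName))
            using (var accessor = mmf.CreateViewAccessor())
            {
                Console.WriteLine(accessor.ReadInt64(0));
            }
        }
    }

But I'm not sure this is the best fit for dictionary-shaped data.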

What's the most effective way for my scenario?

Upvotes: 1

Views: 974

Answers (2)

CSharpYouDull

Reputation: 239

If you're putting the data into a dictionary, why not use one of the popular NoSQL key-value stores (Couchbase, Riak, Redis)? Then any process could work with the data. If you're totally opposed to that idea, you could always use the NancyFx framework to host a local REST service endpoint in the "Host" application; any other application that needs the preloaded data could then interact with the services the host provides, as in the sketch below.
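A rough sketch of that self-hosting idea (Nancy 1.x syntax; the port, the route, and the PreloadedData stand-in are placeholders, not a drop-in implementation):

    using System;
    using System.Collections.Generic;
    using Nancy;
    using Nancy.Hosting.Self;

    // Placeholder standing in for your preloaded, computed dictionary.
    public static class PreloadedData
    {
        public static readonly Dictionary<string, string> Store =
            new Dictionary<string, string> { { "example", "value" } };
    }

    public class DataModule : NancyModule
    {
        public DataModule()
        {
            // Serve individual entries by key; 404 for unknown keys.
            Get["/data/{key}"] = parameters =>
            {
                string key = parameters.key;
                string value;
                return PreloadedData.Store.TryGetValue(key, out value)
                    ? (object)value
                    : HttpStatusCode.NotFound;
            };
        }
    }

    class HostProgram
    {
        static void Main()
        {
            // Self-host inside the "Host" application that did the loading.
            using (var host = new NancyHost(new Uri("http://localhost:8080")))
            {
                host.Start();
                Console.WriteLine("Listening; press Enter to stop.");
                Console.ReadLine();
            }
        }
    }

Other local processes would then request http://localhost:8080/data/<key> instead of re-reading the files themselves.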

Upvotes: 1

bansi

Reputation: 57062

I don't know how you are going to keep 10 GB of data in memory efficiently. Whatever approach you take, holding 10 GB in memory is going to put constant pressure on the system cache and slow down your entire system.

I would suggest using a database if you can. If you cannot use a database, try storing your initialized data on disk and reading parts as and when needed, with some caching, as sketched below.
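Something along these lines, assuming the initialized data can be laid out as fixed-size records (the file path, the 256-byte record size, and the unbounded cache are placeholders for illustration):

    using System;
    using System.Collections.Generic;
    using System.IO;

    class PartialReader
    {
        const int RecordSize = 256;  // placeholder record size
        static readonly Dictionary<long, byte[]> cache =
            new Dictionary<long, byte[]>();

        // Fetch one record by index, hitting the disk only on a cache miss.
        static byte[] GetRecord(string path, long index)
        {
            byte[] record;
            if (cache.TryGetValue(index, out record))
                return record;  // served from cache, no disk access

            using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
            {
                fs.Seek(index * RecordSize, SeekOrigin.Begin);
                record = new byte[RecordSize];
                int total = 0, read;
                while (total < RecordSize &&
                       (read = fs.Read(record, total, RecordSize - total)) > 0)
                    total += read;  // Read may return fewer bytes than asked
            }

            cache[index] = record;  // unbounded here; evict entries in real use
            return record;
        }

        static void Main()
        {
            byte[] r = GetRecord("data.bin", 0);  // "data.bin" is hypothetical
            Console.WriteLine(r.Length);
        }
    }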

Upvotes: 0
