Reputation: 73
I have been running a memory-intensive C++ Linux application on a machine with a lot of memory (128GB RAM). This application reserves about 20GB of memory up front to use later as buffers. I wish to port this application to some kind of SBC like a Raspberry Pi, where I have practically no memory to work with (compared to the usual scenario). My thought process here is to use the HDD (possibly an SSD) instead of RAM for allocating the 20GB of memory. Is there a more efficient way to do this? Also, with this approach, what should my implementation be: just binary files or something else?
Edit: I can reduce the memory allocation to less, say 2GB. Still, I do not have that kind of memory available on the RPi. Since the whole point of the project is to port it to an SBC, I need to solve this with hard disk space alone.
Upvotes: 1
Views: 697
Reputation: 148965
I would not dare port an application written for a 128GB machine, one that actually reserves 20GB of memory for its buffers, to a low-resource machine without further analysis. Since secondary storage is seldom as fast as main memory, you may have to refactor the application's memory management and do explicit swapping, as we did in the '70s and early '80s.
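A minimal sketch of that explicit-swapping pattern, assuming the big buffer can be split into fixed-size chunks and only one chunk needs to be "hot" at a time (the class, file path, and layout are illustrative, not taken from the question's application):

```cpp
#include <cstddef>
#include <fstream>
#include <vector>

class SwappedBuffer {
public:
    SwappedBuffer(const char* path, std::size_t chunk_bytes, std::size_t chunk_count)
        : path_(path), chunk_bytes_(chunk_bytes), chunk_(chunk_bytes), current_(-1) {
        // Pre-size the backing file to chunk_count * chunk_bytes.
        std::ofstream f(path, std::ios::binary);
        f.seekp(static_cast<std::streamoff>(chunk_bytes) * chunk_count - 1);
        f.put('\0');
    }

    // Load chunk `idx` into the single in-RAM window, flushing the old one.
    char* load(std::size_t idx) {
        if (static_cast<long>(idx) == current_) return chunk_.data();
        std::fstream f(path_, std::ios::binary | std::ios::in | std::ios::out);
        if (current_ >= 0) {  // write the previous chunk back (always, for simplicity)
            f.seekp(static_cast<std::streamoff>(current_) * chunk_bytes_);
            f.write(chunk_.data(), chunk_bytes_);
        }
        f.seekg(static_cast<std::streamoff>(idx) * chunk_bytes_);
        f.read(chunk_.data(), chunk_bytes_);  // read the requested chunk
        current_ = static_cast<long>(idx);
        return chunk_.data();
    }

private:
    const char* path_;
    std::size_t chunk_bytes_;
    std::vector<char> chunk_;  // the only buffer actually held in RAM
    long current_;
};
```

The tedious part is exactly what this hides: every access must go through `load`, and the programmer, not the OS, decides what is resident.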
This pattern has not been used for ages, because common computers now have plenty of memory and are fast enough to let the OS provide virtual memory with swap, without bothering the programmer with such a tedious task.
But as soon as you want to run a memory-intensive C++ application on a Raspberry Pi, you run into the old problems of an era when optimizing resource usage was critical.
Upvotes: 1
Reputation: 3911
I would first consider using mmap to map a file as if it were swap.
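A sketch of the idea, assuming a backing file on the SSD (the path and size here are placeholders): the kernel pages the mapping in and out on demand, so the application code sees an ordinary pointer.

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>
#include <cstdio>

int main() {
    // 1GB here; even this must fit in the 32-bit address space (see below).
    const std::size_t kBytes = 1UL << 30;
    int fd = open("/mnt/ssd/buffer.bin", O_RDWR | O_CREAT, 0600);  // placeholder path
    if (fd < 0 || ftruncate(fd, static_cast<off_t>(kBytes)) != 0) {
        perror("backing file");
        return 1;
    }

    // MAP_SHARED writes changes back to the file; under memory pressure the
    // kernel evicts clean pages and flushes dirty ones, much like swap.
    void* p = mmap(nullptr, kBytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    char* buf = static_cast<char*>(p);
    buf[0] = 42;  // touching a page faults it in on demand

    munmap(p, kBytes);
    close(fd);
    return 0;
}
```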
Then there is the address space: the RPi uses a 32-bit ARM CPU, which means you cannot map all 20GB of addresses at the same time, so you need some kind of memory overlay method.
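One possible shape for such an overlay, sketched under assumptions (fixed power-of-two window size, names invented for illustration): keep the full 20GB in the backing file, but map only a small window at a time, remapping as the working set moves. On 32-bit ARM, compile with `-D_FILE_OFFSET_BITS=64` so `off_t` is 64 bits and offsets beyond 4GB work.

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>
#include <cstdint>

class Window {
public:
    // win_bytes must be a multiple of the page size (e.g. 256MB).
    Window(int fd, std::size_t win_bytes)
        : fd_(fd), win_(win_bytes), base_(nullptr), offset_(UINT64_MAX) {}

    // Return a pointer to byte `file_off` of the file, remapping if needed.
    char* at(std::uint64_t file_off) {
        std::uint64_t start = file_off - (file_off % win_);  // window-aligned
        if (start != offset_) {
            if (base_) munmap(base_, win_);  // drop the old window
            base_ = mmap(nullptr, win_, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd_, static_cast<off_t>(start));
            if (base_ == MAP_FAILED) { base_ = nullptr; return nullptr; }
            offset_ = start;
        }
        return static_cast<char*>(base_) + (file_off - start);
    }

private:
    int fd_;
    std::size_t win_;
    void* base_;
    std::uint64_t offset_;
};
```

Note the cost: every pointer is only valid until the next remap, so any code that held raw pointers into the 20GB buffer has to be rewritten to go through the window.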
Bottom line: the RPi is not suitable for the job; you would have to invest far more in development than simply picking a cheap PC.
Upvotes: 3