Reputation: 702
On a low-memory Linux system, I have a project that consists of a single executable and a number of shared libraries. These libraries are not shared with other executables and only one instance of the executable is running at any time. I was told that this setup allows the shared libraries to be unloaded from memory when not in active use. Is this correct?
It seems to me that simply building the whole project into a single static binary (excluding system shared libraries of course) makes a lot more sense, since only one copy of each function would ever be active in memory.
Is there a difference between these two approaches?
Upvotes: 3
Views: 436
Reputation: 229098
I was told that this setup allows the shared libraries to be unloaded from memory when not in active use. Is this correct?
In a sense, yes. The kernel memory manager takes care of this when memory pressure gets high. Read-only sections (such as code) can simply be dropped from memory and loaded back on demand from the original file when they are needed again; other parts can be swapped out to a swap file. Wholly unused code and data will not even be loaded into memory in the first place. The granularity of what gets loaded or "unloaded" is one memory page, commonly 4096 bytes; it is not per function, per file, or anything like that.
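You can actually watch this page granularity with mincore(2), which reports which pages of a mapping are currently resident. Below is a minimal sketch (the file name and the single mid-file read are just for illustration); on a file that isn't already in the page cache, you'll typically see only the one touched page reported as resident:

    /* residency.c -- map a file, touch one byte, and ask the kernel
     * which pages are resident.  Build: gcc -O2 -o residency residency.c */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }

        long page = sysconf(_SC_PAGESIZE);          /* commonly 4096 */
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        fstat(fd, &st);
        size_t npages = (st.st_size + page - 1) / page;

        /* Map it read-only, much like the loader maps a code segment. */
        char *map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (map == MAP_FAILED) { perror("mmap"); return 1; }

        /* Touch a single byte so exactly one page gets faulted in. */
        volatile char c = map[(npages / 2) * page];
        (void)c;

        unsigned char *vec = malloc(npages);
        if (mincore(map, st.st_size, vec) == 0) {
            size_t resident = 0;
            for (size_t i = 0; i < npages; i++)
                resident += vec[i] & 1;
            printf("%zu of %zu pages resident\n", resident, npages);
        }
        return 0;
    }

Note that mincore() reflects the shared page cache, so a file that another process has recently read will show many resident pages even before you touch anything.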
However, this works exactly the same way for the executable file as for a shared library: you don't gain anything in that regard by using shared libraries if only one executable uses them.
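You can see that symmetry in /proc/self/maps: the main executable and its shared libraries all show up as the same kind of file-backed mapping, handled by the same paging machinery. Here is a small sketch that prints its own file-backed mappings:

    /* maps.c -- print this process's file-backed mappings; the executable
     * itself appears alongside libc and friends as an ordinary mmap'd file.
     * Build: gcc -O2 -o maps maps.c */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/self/maps", "r");
        if (!f) { perror("fopen"); return 1; }

        char line[512];
        while (fgets(line, sizeof line, f))
            if (strchr(line, '/'))      /* keep only entries backed by a file */
                fputs(line, stdout);

        fclose(f);
        return 0;
    }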
If several different executables use the same shared libraries, though, they can share the memory of the libraries' read-only parts, so in that scenario you can save memory. That comes at a slight cost: shared libraries must be compiled as position-independent code (PIC), which normally makes the compiled code slightly larger and a couple of percent slower to execute.
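If you want to see where that cost comes from, compile a tiny library source both ways and compare the disassembly: with -fPIC, access to an interposable global goes through the GOT, an extra indirection that non-PIC code avoids. (The file and symbol names here are made up for illustration.)

    /* lib.c -- one-function "library" for comparing code generation.
     *
     *   gcc -O2 -c lib.c       && objdump -d lib.o   # non-PIC
     *   gcc -O2 -fPIC -c lib.c && objdump -d lib.o   # PIC
     */
    long counter;

    long bump(void)
    {
        /* With -fPIC the address of 'counter' is first loaded from the
         * GOT, then dereferenced; without it the access is direct. */
        return ++counter;
    }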
Upvotes: 6