Reputation: 51
With static compilation, only the functions of a library that are actually needed by a program are linked into the program. How does this work with shared libraries? Does the dynamic linker load only the functions a program actually needs into memory, or is the whole shared library always loaded? If it is only the needed functions, how could I get the actual size of a program, including its loaded functions, at runtime?
Thank you! Oliver
Upvotes: 5
Views: 1120
Reputation: 146063
With static compilation, only the functions of a library that are actually needed by a program are linked into the program. How does this work with shared libraries?
Shared libraries are referenced by the program symbolically; that is, the program identifies, by name, each shared library it was linked with.
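For example, on Linux with glibc you can ask the dynamic linker which shared objects it has mapped for the current process. This is only a sketch of the idea: dl_iterate_phdr is a GNU extension, so it won't compile unchanged on macOS or other systems.

```c
/* Sketch (Linux/glibc assumed): list the shared objects the dynamic
 * linker has mapped into this process, identified by name. */
#define _GNU_SOURCE
#include <link.h>
#include <stdio.h>

static int print_object(struct dl_phdr_info *info, size_t size, void *data)
{
    (void)size; (void)data;
    /* dlpi_name is empty for the main executable itself. */
    printf("%s loaded at base address %p\n",
           info->dlpi_name[0] ? info->dlpi_name : "(main executable)",
           (void *)info->dlpi_addr);
    return 0; /* keep iterating over the remaining objects */
}

int main(void)
{
    dl_iterate_phdr(print_object, NULL);
    return 0;
}
```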
Does the dynamic linker load only the functions a program actually needs into memory, or is the whole shared library always loaded?
The program will reference specific entry points and data objects in the shared library. The shared library will be mapped into memory as a single large object, but only the pages that are actually referenced will be paged in by the kernel. The total amount of the library that ends up loaded depends on the density of those references, on references made by other images linked against the library, and on the locality of the library's own code and data.
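You can observe per-page residency of a file mapping yourself. The sketch below assumes Linux, where mincore(2) takes an unsigned char vector (on macOS the vector is plain char), and the library path is just an illustrative placeholder.

```c
/* Sketch: map a file and ask the kernel which pages of the mapping are
 * currently resident in physical memory. */
#define _DEFAULT_SOURCE   /* for mincore() on glibc */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    /* Illustrative path only -- substitute any shared library on your system. */
    const char *path = "/usr/lib/x86_64-linux-gnu/libm.so.6";
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    void *base = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    long page = sysconf(_SC_PAGESIZE);
    size_t npages = (st.st_size + page - 1) / page;
    unsigned char *vec = malloc(npages);
    if (!vec) return 1;

    /* Touch one page so that at least one is certain to be resident. */
    volatile char c = ((char *)base)[0];
    (void)c;

    if (mincore(base, st.st_size, vec) == 0) {
        size_t resident = 0;
        for (size_t i = 0; i < npages; i++)
            resident += vec[i] & 1;
        printf("%zu of %zu pages resident\n", resident, npages);
    }
    return 0;
}
```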
If it is only the needed functions, how could I get the actual size of a program, including its loaded functions, at runtime?
The best way on Mac and other Unix-based systems is with ps(1); for example, its RSS (resident set size) and VSZ (virtual size) columns show how much of the process is currently in physical memory and how much address space it has mapped.
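If you want the number from inside the running program rather than from ps, here is a minimal sketch using the POSIX getrusage(2) call. Note that ru_maxrss is reported in kilobytes on Linux but in bytes on macOS.

```c
/* Sketch: query the process's peak resident set size at runtime. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0)
        printf("peak resident set size: %ld (kB on Linux, bytes on macOS)\n",
               ru.ru_maxrss);
    return 0;
}
```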
Upvotes: 8
Reputation: 490108
When you link statically, only the functions that are (potentially) called get linked into the executable -- but at run-time, the data from the executable file will be read into memory by demand paging.
When the process is created, addresses are assigned to all the code in the executable and shared libraries for that process, but the code/data from the file isn't necessarily read into physical memory at that time. When you attempt to access an address that's not currently in physical memory, it'll trigger a not-present exception. The OS virtual memory manager will react to that by reading the page from the file into physical memory, then letting the access proceed.
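You can watch demand paging happen. The sketch below assumes Linux, since it reads /proc/self/statm: it maps a large anonymous region and shows that the resident set grows only as pages are actually touched.

```c
/* Sketch (Linux assumed): demonstrate that mapped pages only become
 * resident when they are touched. */
#define _DEFAULT_SOURCE   /* for MAP_ANONYMOUS on glibc */
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static long resident_pages(void)
{
    long size = 0, resident = -1;
    FILE *f = fopen("/proc/self/statm", "r");
    if (f) {
        if (fscanf(f, "%ld %ld", &size, &resident) != 2)
            resident = -1;
        fclose(f);
    }
    return resident;
}

int main(void)
{
    const size_t len = 64 * 1024 * 1024; /* 64 MiB of address space */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return 1;

    printf("resident pages after mmap:  %ld\n", resident_pages());

    /* Touch every 16th page; only those pages should become resident. */
    long page = sysconf(_SC_PAGESIZE);
    for (size_t off = 0; off < len; off += 16 * (size_t)page)
        p[off] = 1;

    printf("resident pages after touch: %ld\n", resident_pages());
    return 0;
}
```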
The loading is done on a page-by-page basis, which usually means blocks of 4 or 8 kilobytes at a time (e.g., x86 uses 4 KB pages; Alpha used 8 KB). x86 also has the ability to create larger (4 megabyte) pages, but those aren't (at least usually) used for normal code -- they're for mapping big blocks of memory that remain mapped (semi-)permanently, such as the "window" of memory on a typical graphics card that's also mapped so it's directly accessible by the CPU.
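Rather than hard-coding 4 KB, a program can query the page size the kernel actually uses; sysconf(_SC_PAGESIZE) is POSIX and works on both Linux and macOS.

```c
/* Sketch: query the page size used for this process at runtime. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    printf("page size: %ld bytes\n", page);
    return 0;
}
```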
Most loaders also have some optimizations, so (for example) they'll attempt to read bigger blocks of memory when the program first starts up. This lets it start faster than if there were an interrupt and a separate read for each page of code as it's accessed. The exact details of that optimization vary between OSes (and often even between versions of the same OS).
Upvotes: 2