Reputation: 101
The only overhead incurred by fork() is the duplication of the parent’s page tables and the creation of a unique process descriptor for the child. In Linux, fork() is implemented through the use of copy-on-write pages. Copy-on-write (or COW) is a technique to delay or altogether prevent copying of the data.
So why is there a need to copy the page tables? As long as the processes share the pages read-only, or until one of them writes, the translation is the same for both the parent and the child, so why can't they share the page tables too?
Can someone please explain?
Thanks in advance.
Upvotes: 10
Views: 4253
Reputation: 129464
Because COW works on the basis that the pages are read-only, we need a copy of the page table in which every entry is marked read-only. When the new process writes to somewhere, a page fault is taken as a consequence of writing to a read-only page. The page-fault handler looks at the status of the page, determines whether it is supposed to be writable (if not, it raises a segfault, just as writing to read-only memory would in the original process) and copies the relevant original page for the new process.
The original page table is read-write for some of its entries, so at least those will have to be copied. I do believe the entire page table is copied, because it makes some other code simpler and a page-table entry is not very large: four or eight bytes per page, plus one directory entry for every 4096 KB mapped, plus one for every 4096 * 4096 KB, and so on up the hierarchy.
There are also some interesting aspects if, for example, we have some code that does:
char *ptr = malloc(big_number);
// Fill ptr[...] with some data.
if (!fork())
{
    // Child process works on the ptr data.
    ...
}
else
{
    free(ptr);
}
Now the page-table entries for that region in the parent process will be removed. If we were sharing the actual page tables with the child process, we would need to know that those entries are shared before tearing them down.
Lots of other similar problems occur when receiving/sending data via network, writing to disk, swapping pages in and out, etc, etc.
Upvotes: 7