Reputation: 2666
Following the documentation for shared memory here, I have implemented a minimal example of accessing NumPy arrays backed by shared memory in a function called by a worker process in a pool. My assumption is that this code should produce minimal memory overhead for each additional worker (there is some overhead to copy the interpreter and non-shared variables, but the 16 GB of shared memory should not be copied).
import numpy as np
from multiprocessing import Pool, shared_memory
from itertools import product
from tqdm import tqdm

if __name__ == "__main__":
    a_shared_memory = shared_memory.SharedMemory(create=True, size=8_000_000_000)
    a = np.ndarray((20, 100, 100, 100, 100), np.float32, buffer=a_shared_memory.buf)
    b_shared_memory = shared_memory.SharedMemory(create=True, size=8_000_000_000)
    b = np.ndarray((20, 100, 100, 100, 100), np.float32, buffer=b_shared_memory.buf)

    def test_func(args):
        a[args] + b[args[:-1]]

    with tqdm(total=20 * 100 * 100 * 100) as pbar:
        with Pool(16) as pool:
            for _ in pool.imap_unordered(test_func,
                                         product(range(20), range(100), range(100), range(100)),
                                         chunksize=16):
                pass
However, in practice when running this code memory usage grows in each process over time, both in the RES memory metric as well as the SHR memory metric as reported by top. (The rate of memory accumulation can be adjusted via the size of the arrays selected inside the test_func function.)
This behavior is confusing to me: these arrays are in shared memory, and I would therefore assume that taking a view of them shouldn't incur any memory allocation (I am testing on Linux, so reading alone should not cause any copying). Further, I don't even store the results of this computation anywhere, so it is unclear why memory is being allocated.
Two further notes:
According to this answer, even reading/accessing an array from shared memory will force a copy + write, since the refcount must be updated. However, this should only affect the header memory page, which should be about 4 KB. Why does memory continue to grow?
If I simply change the code in the following way:
def test_func(args):
    a[args], b[args[:-1]]
the issues resolve: there is no memory overhead (i.e. memory is shared) and no increasing memory allocation over time.
I've tried to present the simplest, most intuitive application of the documentation to multiprocessing with shared memory, yet it remains very unclear to me how and why it isn't working as expected. I would like to perform some simple calculations in test_func, including viewing the shared memory, addition, matrix-vector multiplication, etc. Any help in getting a better grasp of how to use shared memory correctly would be much appreciated.
Update:
When I change the test_func code to a[0, 0, 0, 0] + b[0, 0, 0], the issue disappears. Does this mean that there is some reference counter in the middle of the NumPy arrays, such that when args changes, different parts of the array are accessed and memory increases, but when the indices are always the same, memory doesn't increase?
Upvotes: 2
Views: 952
Reputation: 50816
However, in practice when running this code memory usage grows in each process over time, both in the RES memory metric as well as the SHR memory metric as reported by top.
This is normal, but it is caused neither by a copy nor by any allocation done by the interpreter. It is caused by page faults and virtual memory. Indeed, the shared-memory buffer is created and given a virtual address range, but an operating system (OS) like Linux does not immediately map it to physical RAM. Reserving the space up front would be inefficient, since many applications allocate space they never fully use, or at least not right away. Instead, Linux maps a virtual page to a physical page on the first touch, that is, on the first read or write. For security reasons, Linux fills the newly mapped page with zeros, even if you only read it (the RAM might otherwise still contain passwords from other sensitive applications such as your browser). The growing memory usage comes from pages being progressively zero-filled and mapped to physical memory as they are touched.
If you do not want this to happen, you can simply fill the arrays with zeros manually, using a.fill(0) and b.fill(0), before the multiprocessing-based computation. On my Linux machine, this reserves the space in physical memory up front, and no further space is reserved after that.
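As a minimal sketch, assuming the a and b arrays from the question's code, the pre-touching step would go right after the arrays are created (inside the if __name__ == "__main__": block):

# Touch every page of both shared buffers once: writing zeros forces the
# OS to map the pages to physical memory immediately, so RES/SHR no longer
# grow later while the workers index into the arrays.
a.fill(0)
b.fill(0)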
Note that Linux is one example of an operating system doing this, but Windows behaves quite similarly (AFAIK macOS does too). Also note that some (rare) systems are configured to map memory physically right away for the sake of performance (e.g. some game platforms and HPC systems).
When I change the test_func code to a[0, 0, 0, 0] + b[0, 0, 0] the issue disappears.
This is because only the first page of the shared-memory buffer is read, causing a first touch on that page alone (so only that page gets mapped to physical memory). The other pages are left untouched and therefore remain mapped in virtual memory only, not in physical memory. At least on mainstream systems like yours and mine.
According to this answer, even reading/accessing an array from shared memory will force a copy + write, since the refcount must be updated. However, this should only affect the header memory page, which should be about 4 KB. Why does memory continue to grow?
This is mostly true. However, the NumPy arrays do not own the buffer here, so the reference counting does not affect the shared buffer, only the NumPy array objects, which are views of the shared buffer. In practice, NumPy arrays are always views (although the internal buffer associated with a given array may not be shared by any other instance). NumPy is responsible for allocating and collecting the internal buffer when needed (except for buffers like this shared one, which NumPy does not own).
If I simply change the code in the following way: [...] a[args], b[args[:-1]] [...] the issues resolve.
This is expected, but a bit tricky to understand since it combines the magic of the OS with that of NumPy. Indeed, a[args] and b[args[:-1]] are NumPy views, so they do not read the memory of the shared buffer unless you actually read the content of the view (which is not done here). If you write a[args][0], then the memory is read and memory consumption appears to grow. The same is true for any NumPy function reading or writing data through the a and b views, like np.sum(a[args]).
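To see the difference between merely creating a view and actually reading it, here is a small Linux-oriented sketch (the ~100 MB segment size and the resource-based RSS measurement are illustrative choices, not taken from the question):

import numpy as np
import resource
from multiprocessing import shared_memory

def rss_mb():
    # ru_maxrss is reported in kilobytes on Linux
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

shm = shared_memory.SharedMemory(create=True, size=100_000_000)   # ~100 MB
arr = np.ndarray((25_000_000,), np.float32, buffer=shm.buf)

view = arr[1_000_000:]                 # a view: no page is touched yet
print(f"after view creation: {rss_mb():.0f} MB")

total = view.sum()                     # reading touches (almost) every page
print(f"after reading:       {rss_mb():.0f} MB")  # RSS grows by ~100 MB

shm.close()
shm.unlink()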
Note that the mapped shared memory must be released: call close on it in every process that uses it, and call unlink once from the main process. This is critical to avoid a system resource leak.
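A minimal sketch of that cleanup, assuming the a_shared_memory and b_shared_memory names from the question:

# In every process that used the buffers (including the workers), once done:
a_shared_memory.close()
b_shared_memory.close()

# In the main (creating) process only, after all workers have finished:
a_shared_memory.unlink()
b_shared_memory.unlink()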
To prove that the shared buffers are truly shared, one can test the following (POSIX-only) program:
import numpy as np
from multiprocessing import Pool, shared_memory
import os

a_shared_memory = shared_memory.SharedMemory(create=True, size=8_000_000_000)
a = np.ndarray((20, 100, 100, 100, 100), np.float32, buffer=a_shared_memory.buf)
a[0, 0, 0, 0, 0] = 42
print('from init:', a[0, 0, 0, 0, 0])

pid = os.fork()

if pid:  # parent
    os.wait()
    print('from parent (after the wait):', a[0, 0, 0, 0, 0])
    a_shared_memory.unlink()
    del a
else:  # child
    print('from child (before):', a[0, 0, 0, 0, 0])
    a[0, 0, 0, 0, 0] = 815
    print('from child (after):', a[0, 0, 0, 0, 0])
    exit()
Which prints:
from init: 42.0
from child (before): 42.0
from child (after): 815.0
from parent (after the wait): 815.0
Your code does not run on my machine, which runs Windows. It turns out you made assumptions that are at least not portable and certainly non-standard. For example, test_func should not be accessible from the sub-processes, since it is defined inside the main section and the sub-processes do not execute that section. As a result, there is an error. On Linux, and more generally on POSIX platforms, processes are created using the fork system call. Forked processes are almost the same process as their parent (like two cells after a division): from the interpreter's point of view, they have a very similar memory state, so the child processes have a and b defined in their environment, as well as a_shared_memory and b_shared_memory. Accessing them this way is non-standard, but it works on POSIX. I think SharedMemoryManager should be used in this context. Alternatively, I think you can give the shared-memory segments a name so that the child processes can attach to them without going through global variables (which is a very bad practice in software engineering).
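As a sketch of that name-based approach (the segment names "a_buf" and "b_buf" and the init_worker helper are illustrative choices, not from the question), which should work with both the fork and spawn start methods:

import numpy as np
from itertools import product
from multiprocessing import Pool, shared_memory

SHAPE = (20, 100, 100, 100, 100)

def init_worker():
    # Each worker attaches to the existing segments by name instead of
    # relying on variables inherited through fork.
    global a, b, a_shm, b_shm
    a_shm = shared_memory.SharedMemory(name="a_buf")
    b_shm = shared_memory.SharedMemory(name="b_buf")
    a = np.ndarray(SHAPE, np.float32, buffer=a_shm.buf)
    b = np.ndarray(SHAPE, np.float32, buffer=b_shm.buf)

def test_func(args):
    a[args] + b[args[:-1]]

if __name__ == "__main__":
    size = int(np.prod(SHAPE)) * 4  # float32 -> 4 bytes per element
    a_shm = shared_memory.SharedMemory(create=True, size=size, name="a_buf")
    b_shm = shared_memory.SharedMemory(create=True, size=size, name="b_buf")
    a = np.ndarray(SHAPE, np.float32, buffer=a_shm.buf)
    b = np.ndarray(SHAPE, np.float32, buffer=b_shm.buf)
    try:
        with Pool(16, initializer=init_worker) as pool:
            args_iter = product(range(20), range(100), range(100), range(100))
            for _ in pool.imap_unordered(test_func, args_iter, chunksize=16):
                pass
    finally:
        a_shm.close()
        a_shm.unlink()
        b_shm.close()
        b_shm.unlink()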
Upvotes: 3