Reputation: 11
I created a shared memory block with a size of 10 bytes and wanted to generate random numbers and put them into the memory block, but it always just gives me error messages, so I wonder if I am doing it wrong.
from multiprocessing import shared_memory
import random
shared_mem_1 = shared_memory.SharedMemory(create=True, size=10)
num = (random.sample(range(1, 1000), 10))
for i, c in enumerate(num):
    shared_mem_1.buf[i] = c
The error message:
Traceback (most recent call last):
File "main.py", line 7, in <module>
    shared_mem_1.buf[i] = c
ValueError: memoryview: invalid value for format 'B'
Upvotes: 1
Views: 1566
Reputation: 11
I have been sharing values between two concurrently running scripts with CSV files for decades without any problem, and I was trying to switch to sharing directly. Here is my test code with shared_memory; I posted the test code for shared_memory_dict in another thread. Note that shared_memory cannot share negative values, whereas the _dict version can. Source file: SrcArry2.py
from multiprocessing import shared_memory
from time import sleep
shm_a = shared_memory.SharedMemory(name='Tst2', create=True, size=64)
if __name__ == "__main__":
    while True:
        for i in range(0, 16):
            try:
                print(shm_a.buf[4])
            except:
                pass
            shm_a.buf[0] = i
            shm_a.buf[1] = (i + 10)
            shm_a.buf[2] = (i + 20)
            shm_a.buf[3] = (i * 3)
            sleep(1)
Receiving file: RcvArry2.py
from multiprocessing import shared_memory
from time import sleep
shm_a = shared_memory.SharedMemory(name='Tst2', create=False, size=10)
if __name__ == "__main__":
    while True:
        print(shm_a.buf[0])
        print(shm_a.buf[1])
        print(shm_a.buf[2])
        print(shm_a.buf[3])
        shm_a.buf[4] = shm_a.buf[0] * 10
        sleep(1)
The buf[4] is changed in the receiving file. The source file has to be started before the receiving file; buf[4] has not been written at that time, so the exception is handled.
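As a workaround for the negative-value limitation mentioned above, a signed integer can be packed into the raw buffer with the struct module. This is only a sketch under the assumption that both scripts agree on the format string and offset; the block name 'TstSigned' is made up for illustration:

```python
import struct
from multiprocessing import shared_memory

# Hypothetical block name, for illustration only.
shm = shared_memory.SharedMemory(name='TstSigned', create=True, size=8)
try:
    # Pack a negative value as a signed 4-byte little-endian integer ('<i').
    shm.buf[0:4] = struct.pack('<i', -42)
    # Unpack it again; struct restores the sign.
    value, = struct.unpack('<i', bytes(shm.buf[0:4]))
    print(value)  # -42
finally:
    shm.close()
    shm.unlink()
```

A reader process would attach with SharedMemory(name='TstSigned') and unpack with the same '<i' format.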
Upvotes: 0
Reputation: 11075
I find the most useful way to take advantage of multiprocessing.shared_memory
is to create a numpy array that uses the shared memory region as its memory buffer. Numpy handles setting the correct data type (is it an 8-bit integer? a 32-bit float? a 64-bit float? etc.) as well as providing a convenient interface (similar to, but more extensible than, Python's built-in array
module). That way any modifications to the array are visible across any processes that have that same memory region mapped to an array.
from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory
import numpy as np
def foo(shm, shape, dtype):
    arr = np.ndarray(shape, dtype, buffer=shm.buf)  # remote version of arr
    print(arr)
    arr[0] = 20  # modify some data in arr to show modifications cross to the other process
    shm.close()  # SharedMemory is internally a file, which needs to be closed.

if __name__ == "__main__":
    shm = SharedMemory(create=True, size=40)  # 40 bytes for 10 floats
    arr = np.ndarray([10], 'f4', buffer=shm.buf)  # local version of arr (10 floats)
    arr[:] = np.random.rand(10)  # insert some data to arr
    p = Process(target=foo, args=(shm, arr.shape, arr.dtype))
    p.start()
    p.join()  # wait for p to finish
    print(arr)  # arr should reflect the changes made in foo, which occurred in another process
    shm.close()  # close the file
    shm.unlink()  # delete the file (happens automatically on Windows but not Linux)
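A variant of the same pattern, as a sketch (the reader function and passing shm.name are my additions, not part of the answer above): a child process can attach to the block by its name instead of receiving the SharedMemory object itself, which is convenient when the processes are started independently.

```python
from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory
import numpy as np

def reader(name, shape, dtype):
    # Attach to the existing block by name rather than inheriting the object.
    shm = SharedMemory(name=name)
    arr = np.ndarray(shape, dtype, buffer=shm.buf)  # view over the same memory
    print(arr.sum())
    shm.close()

if __name__ == "__main__":
    shm = SharedMemory(create=True, size=40)        # 40 bytes for 10 floats
    arr = np.ndarray([10], 'f4', buffer=shm.buf)
    arr[:] = 1.0
    p = Process(target=reader, args=(shm.name, arr.shape, arr.dtype))
    p.start()
    p.join()
    shm.close()
    shm.unlink()
```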
Upvotes: 0
Reputation: 10379
The problem is that num
contains values over 255, and when such a value is assigned to buf
the invalid value for format 'B'
error appears. Format B
is exactly the format for unsigned bytes (check the table of formats here).
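The limit can be seen directly with a plain memoryview, independent of shared memory (a minimal illustration, not from the original code):

```python
# A memoryview over a bytearray uses format 'B' (unsigned byte),
# so each element must be in the range 0-255.
mv = memoryview(bytearray(2))
mv[0] = 255   # fits in one byte: fine
try:
    mv[1] = 1000  # too large for one byte
except ValueError as err:
    print(err)  # memoryview: invalid value for format 'B'
```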
There are 2 options:
1. Limit the random numbers to the 0-255 range so every value fits in a single byte.
2. Convert each integer to bytes with the int.to_bytes function.
For option 1:
from multiprocessing import shared_memory
import random
shared_mem_1 = shared_memory.SharedMemory(create=True, size=10)
num = (random.sample(range(0, 255), 10))
for i, c in enumerate(num):
    shared_mem_1.buf[i] = c
shared_mem_1.unlink()
For option 2 you'd need to pay attention to the byte order (big-endian/little-endian) and how many bytes an integer takes in your case (the amount of memory to allocate also depends on this length). Each assignment to the buffer has to calculate the offset of the values saved before it.
from multiprocessing import shared_memory
import random
int_length = 4
shared_mem_1 = shared_memory.SharedMemory(create=True, size=int_length * 10)
num = (random.sample(range(1, 1000), 10))
for i, c in enumerate(num):
    pos = i * int_length
    shared_mem_1.buf[pos:pos + int_length] = c.to_bytes(int_length, 'big')
shared_mem_1.unlink()
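To round out option 2, here is a companion sketch (not part of the original answer) of reading the integers back with int.from_bytes, using the same width and byte order that were used for writing:

```python
from multiprocessing import shared_memory

int_length = 4
shared_mem_1 = shared_memory.SharedMemory(create=True, size=int_length * 10)
nums = [1, 999, 500, 42, 7, 123, 888, 256, 31, 1000]
for i, c in enumerate(nums):
    pos = i * int_length
    shared_mem_1.buf[pos:pos + int_length] = c.to_bytes(int_length, 'big')

# Decode each 4-byte chunk back into an int with the matching byte order.
restored = [
    int.from_bytes(bytes(shared_mem_1.buf[i * int_length:(i + 1) * int_length]), 'big')
    for i in range(10)
]
print(restored == nums)  # True

shared_mem_1.close()
shared_mem_1.unlink()
```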
Upvotes: 2