Reputation: 924
Suppose I have 3 different processes that do different logic in a forever loop. I want to run all of them in parallel, while each process can access a shared_object, which is a heavy object of a class. So I tried using multiprocessing with a manager to achieve it, like this:
import multiprocessing
from multiprocessing.managers import BaseManager, NamespaceProxy
import time
import random

class SharedObject():
    def __init__(self):
        self.a = 1

    def show_a(self):
        print(self.a)

class ProcessManager(BaseManager):
    pass

class ProxyBase(NamespaceProxy):
    _exposed_ = ('__getattribute__', '__setattr__', '__delattr__')

class ManagerProxy(ProxyBase):
    pass

def register_proxy(name, cls, proxy):
    for attr in dir(cls):
        if callable(getattr(cls, attr)) and not attr.startswith("__"):
            proxy._exposed_ += (attr,)
            # bind attr as a default argument so each generated method
            # forwards its own name (a bare `attr` would be captured late)
            setattr(proxy, attr,
                    lambda s, attr=attr: object.__getattribute__(s, '_callmethod')(attr))
    ProcessManager.register(name, cls, proxy)

register_proxy('shared_object', SharedObject, ManagerProxy)
process_manager = ProcessManager()
process_manager.start()
shared_object = process_manager.shared_object()
def process_1():
    while True:
        print('Process 1 see {}'.format(shared_object.a))
        shared_object.a = 1
        time.sleep(1)

def process_2():
    while True:
        print('Process 2 see {}'.format(shared_object.a))
        shared_object.a = 2
        time.sleep(1)

def process_3():
    while True:
        print('Process 3 see {}'.format(shared_object.a))
        shared_object.a = 3
        if random.randint(0, 1) == 1:
            shared_object.show_a()
        time.sleep(1)
first_process = multiprocessing.Process(name="First process", target=process_1)
first_process.start()
second_process = multiprocessing.Process(name="Second process", target=process_2)
second_process.start()
third_process = multiprocessing.Process(name="Third process", target=process_3)
third_process.start()
shared_object.show_a()
while True:
    time.sleep(10)
It works, but it's too slow for me since I have to pass around big numpy arrays. Is there any other way to make this faster (real-time speed)? Thanks a lot
Upvotes: 0
Views: 831
Reputation: 344
It looks like the problem solved by multiprocessing.shared_memory, but a) it's only available in Python 3.8+, and b) the code would need to be restructured, at the very least.
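For what it's worth, here is a minimal, untested sketch of how that could look with a numpy array as the heavy object (the names are mine, and it needs 3.8+, which I don't have at hand):

from multiprocessing import Process, shared_memory  # Python 3.8+
import numpy as np

def worker(shm_name, shape, dtype):
    # Attach to the existing block and view it as a numpy array (no copy)
    shm = shared_memory.SharedMemory(name=shm_name)
    arr = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
    arr[0] += 1  # the mutation is visible to every attached process
    shm.close()

if __name__ == '__main__':
    shm = shared_memory.SharedMemory(create=True, size=4 * 8)  # room for four int64s
    arr = np.ndarray((4,), dtype=np.int64, buffer=shm.buf)
    arr[:] = 0

    p = Process(target=worker, args=(shm.name, arr.shape, arr.dtype))
    p.start()
    p.join()
    print(arr)  # [1 0 0 0]

    shm.close()
    shm.unlink()  # release the block once all processes are done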
EDIT:
Since I couldn't get it to work with Python 3.7, I decided to use the shared memory primitives available in 3.5+, Array (and Value, which could be what you need; there's a quick Value sketch after the output below). The following code runs happily:
import time
import random
from multiprocessing import Process, Array

s1 = Array('i', [1])

def process_1():
    while True:
        print('Process 1 see {}'.format(s1[0]))
        s1[0] = 1
        time.sleep(1)

def process_2():
    while True:
        print('Process 2 see {}'.format(s1[0]))
        s1[0] = 2
        time.sleep(1)

def process_3():
    while True:
        print('Process 3 see {}'.format(s1[0]))
        s1[0] = 3
        if random.randint(0, 1) == 1:
            print(s1[0])
        time.sleep(1)

first_process = Process(name="First process", target=process_1)
first_process.start()
second_process = Process(name="Second process", target=process_2)
second_process.start()
third_process = Process(name="Third process", target=process_3)
third_process.start()

while True:
    time.sleep(10)
Getting
Process 1 see 1
Process 2 see 1
Process 3 see 1
Process 1 see 3
Process 2 see 1
Process 3 see 2
3
Process 1 see 3
Process 2 see 1
Process 3 see 2
3
[...]
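For a single scalar, Value (mentioned above) works the same way; a quick sketch:

from multiprocessing import Value

counter = Value('i', 0)    # one shared int, created with a lock
with counter.get_lock():   # guard read-modify-write sequences
    counter.value += 1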
I would still pass the array to the processes, something like:

def process_1(shared):
    ...

and then

Process(name="First process", target=process_1, args=(s1,))

to make it clearer what each process is working on, though. (Note that args must be a tuple, hence the trailing comma.)
Also, since I haven't tried it with BIG objects, I am not really sure how it would fare...
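One thing that should help with big numpy arrays, though: an Array can be wrapped in a zero-copy numpy view, so the processes share the buffer directly instead of copying it around. A sketch (the dtype and shape are my own assumptions):

import numpy as np
from multiprocessing import Array

# flat buffer of doubles, big enough for a 100x100 matrix
shared = Array('d', 100 * 100)

# zero-copy numpy view over the same shared buffer
view = np.frombuffer(shared.get_obj()).reshape(100, 100)
view[0, 0] = 42.0  # visible in every process that inherited `shared`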
Upvotes: 1