user169808

Reputation: 513

Call method on running process from parent process

I'm trying to write a program that interfaces with hardware via pyserial, following this diagram: https://github.com/kiyoshi7/Intrument/blob/master/Idea.gif . My problem is that I don't know how to tell the child process to run a method.

I tried reducing my problem down to its essence: how can I call the method request() from the main script? I just don't know how to handle two-way communication like this; in the examples using a Queue I only see plain data being shared, or I can't follow the examples.

import multiprocessing 
from time import sleep

class spawn:
    def __init__(self, _number, _max):
        self._number = _number
        self._max = _max
        self.Update()

    def request(self, x):
        print("{} was requested.".format(x))

    def Update(self):
        while True:
            print("Spawned {} of {}".format(self._number, self._max))
            sleep(2)

if __name__ == '__main__':
    p = multiprocessing.Process(target=spawn, args=(1,1))
    p.start()
    sleep(5)
    p.request(2) #here I'm trying to run the method I want

Update, thanks to Carcigenicate:

import multiprocessing 
from time import sleep
from operator import methodcaller

class Spawn:
    def __init__(self, _number, _max):
        self._number = _number
        self._max = _max
        # Don't call update here

    def request(self, x):
        print("{} was requested.".format(x))

    def update(self):
        while True:
            print("Spawned {} of {}".format(self._number, self._max))
            sleep(2)

if __name__ == '__main__':
    spawn = Spawn(1, 1)  # Create the object as normal

    p = multiprocessing.Process(target=methodcaller("update"), args=(spawn,)) # Run the loop in the process
    p.start()

    while True:
        sleep(1.5)
        spawn.request(2)  # Now you can reference the "spawn"

Upvotes: 1

Views: 410

Answers (1)

Carcigenicate

Reputation: 45742

You're going to need to rearrange things a bit. I would not do the long-running (infinite) work in the constructor. That's generally poor practice, and it's complicating things here. I would instead initialize the object, then run the loop in the separate process:

import multiprocessing
from time import sleep
from operator import methodcaller

class Spawn:
    def __init__(self, _number, _max):
        self._number = _number
        self._max = _max
        # Don't call update here

    def request(self, x):
        print("{} was requested.".format(x))

    def update(self):
        while True:
            print("Spawned {} of {}".format(self._number, self._max))
            sleep(2)

if __name__ == '__main__':
    spawn = Spawn(1, 1)  # Create the object as normal

    p = multiprocessing.Process(target=methodcaller("update"), args=(spawn,)) # Run the loop in the process
    p.start()

    spawn.request(2)  # Now you can reference the "spawn" object to do whatever you like

Unfortunately, since Process requires that its target argument be picklable, you can't just use a lambda wrapper like I originally had (whoops). I'm using operator.methodcaller to create a picklable wrapper: methodcaller("update") returns a callable that calls update on whatever object is given to it, and we pass it spawn so that update is called on spawn.
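
Just to illustrate what methodcaller does with the Spawn class above (this is only a demonstration, not something the program needs), the following is equivalent to calling spawn.request(2) directly:

from operator import methodcaller

call_request = methodcaller("request", 2)  # package up the method name "request" and the argument 2
call_request(spawn)                        # equivalent to spawn.request(2); prints "2 was requested."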

You could also create a wrapper function using def:

def wrapper():
    spawn.update()

. . .

p = multiprocessing.Process(target=wrapper)  # Run the loop in the process

But that only works if it's feasible to have wrapper as a global function. You may need to play around to find out what works best, or use a multiprocessing library that doesn't require picklable tasks.
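
If you eventually need the child process itself to execute request() (the two-way communication mentioned in the question), one option is to send commands over a multiprocessing.Queue and have the loop poll it. A minimal sketch, assuming Python 3 (where multiprocessing can pickle the bound method spawn.update) and a two-second poll interval:

import multiprocessing
import queue
from time import sleep

class Spawn:
    def __init__(self, _number, _max):
        self._number = _number
        self._max = _max

    def request(self, x):
        print("{} was requested.".format(x))

    def update(self, commands):
        while True:
            try:
                x = commands.get(timeout=2)  # wait up to 2 seconds for a command from the parent
                self.request(x)              # this runs inside the child process
            except queue.Empty:
                pass
            print("Spawned {} of {}".format(self._number, self._max))

if __name__ == '__main__':
    commands = multiprocessing.Queue()
    spawn = Spawn(1, 1)

    p = multiprocessing.Process(target=spawn.update, args=(commands,))
    p.start()

    sleep(5)
    commands.put(2)  # the child picks this up and calls request(2) in its own process

Here request() runs in the child, which is what you'd want if the method has to talk to hardware that the child process owns.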


Note: please use proper Python naming conventions. Class names start with capitals, and method names are lowercase. I fixed that up in the code I posted.

Upvotes: 1
