stustd

Reputation: 335

Callback from "multiprocessing" with CFFI segfaults after ~100 iterations

A PyPy callback that works perfectly (in an infinite loop) when implemented (straightforwardly) as a method of a Python object segfaults after approximately 100 iterations when I move the Python object into a separate multiprocessing process.

In the main code I have:

import multiprocessing as mp
from cffi import FFI

ffi = FFI()   # ABI mode; cdef() / dlopen() done elsewhere

class Task(object):

    def __init__(self, com, lib):

        self.com = com # communication queue
        self.lib = lib # ffi library
        self.proc = mp.Process(target=self.spawn, args=(self.com,))
        self.register_callback()

    def spawn(self, com):
        print('%s spawned.'%self.name)
        # loop (keeping 'self' alive) until BREAK:
        while True:
            cmd = com.get()
            if cmd == self.BREAK:
                break
        print("%s stopped."%self.name)

    @ffi.callback("int(void*, Data*)")   # old cffi (ABI mode)
    def callback(self, data):
        # <work on data>
        return 1

    def register_callback(self):
        s = ffi.new_handle(self)
        self.lib.register_callback(s, self.callback)  # C-call
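Each task is created and started roughly like this (a sketch; the queue and ffi library setup are elided, and BREAK is assumed to be a class-level sentinel):

com = mp.Queue()
task = Task(com, lib)   # 'lib' is the ffi library opened elsewhere
task.proc.start()       # runs Task.spawn in a separate process
# ... callbacks fire while the worker loops on the queue ...
com.put(Task.BREAK)     # ask the worker loop to exit
task.proc.join()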

The idea is that multiple tasks should serve an equal number of callbacks concurrently. I have no clue what may cause the segfault, especially since it runs fine for the first ~100 iterations or so. Help much appreciated!

Upvotes: 0

Views: 350

Answers (1)

stustd

Reputation: 335

Solution

The handle 's' is garbage collected when 'register_callback()' returns. Making the handle an attribute of 'self' (and passing that attribute) keeps it alive.
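A minimal sketch of the corrected registration (the attribute name '_handle' is my choice):

def register_callback(self):
    # keep the handle referenced from 'self' so it is not garbage
    # collected while the C library still holds the pointer
    self._handle = ffi.new_handle(self)
    self.lib.register_callback(self._handle, self.callback)  # C-call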

Standard CPython (cffi 1.6.0) segfaulted on the first iteration (i.e. garbage collection was immediate) and gave me a crucial, informative error message. PyPy, on the other hand, segfaulted after approximately 100 iterations without providing any message... Both run fine now.

Upvotes: 0
