Reputation: 781
I have a script that is run by a Celery worker and uses Pool from the billiard library to spawn multiple processes. I am trying to use Sentry inside those processes so that any handled/unhandled exception is captured. Below is my sample code:
from configurations import SENTRY_CLIENT

def process_data(data):
    try:
        s = data / 0
    except ZeroDivisionError:
        print "Sentry must report this."
        SENTRY_CLIENT.captureException()

import multiprocessing
from billiard import Pool

POOL_SIZE = multiprocessing.cpu_count()
pool = Pool(POOL_SIZE)
data = [0, 1, 2, 3, 4, 5]
pool.map(process_data, data)
pool.close()
pool.terminate()
SENTRY_CLIENT is defined in the configuration file configurations.py:
from raven import Client
SENTRY_CLIENT = Client("dsn")
One option would be to pass SENTRY_CLIENT to (or create it in) each process, but I am trying to avoid that for now; a sketch of that approach is shown below for reference.
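This is only a minimal sketch of that per-process idea, assuming billiard's Pool accepts the same initializer/initargs arguments as multiprocessing.Pool; the _init_sentry helper and the module-level _sentry variable are illustrative names, not part of my actual code:

# Sketch: create one raven Client per worker process via the pool
# initializer instead of importing a shared SENTRY_CLIENT.
import multiprocessing
from billiard import Pool
from raven import Client

_sentry = None  # per-process client, set by the initializer

def _init_sentry(dsn):
    global _sentry
    _sentry = Client(dsn)

def process_data(data):
    try:
        s = data / 0
    except ZeroDivisionError:
        _sentry.captureException()

pool = Pool(multiprocessing.cpu_count(),
            initializer=_init_sentry, initargs=("dsn",))
pool.map(process_data, [0, 1, 2, 3, 4, 5])
pool.close()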
Also, since this script is executed by a Celery worker, I have configured Sentry for Celery, and any exception raised before pool.map() is caught by Sentry just fine.
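For context, the Celery-level Sentry configuration I mean is roughly the standard raven setup (a sketch using the raven.contrib.celery helpers; the exact wiring in my project may differ):

from raven import Client
from raven.contrib.celery import register_signal, register_logger_signal

client = Client("dsn")
register_logger_signal(client)  # capture errors logged via Celery's logger
register_signal(client)         # capture task failures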
I also printed SENTRY_CLIENT.__dict__ and got valid items with correct values.
My question is: why is SENTRY_CLIENT not sending exceptions to the Sentry dashboard? Maybe I am missing something in the configuration.
Upvotes: 3
Views: 2003
Reputation: 124
As PoloSoares said, you should change the transport instead of adding a delay with sleep. Raven's default (threaded) transport sends events asynchronously on a background thread, which may not get a chance to flush before a pool process exits; the blocking HTTPTransport sends the event before captureException() returns. Example of a valid solution for version 6.10.0 of the raven lib:
import multiprocessing
from billiard import Pool
from raven import Client
from raven.transport.http import HTTPTransport

SENTRY_CLIENT = Client("dsn", transport=HTTPTransport)

def process_data(data):
    try:
        s = data / 0
    except ZeroDivisionError:
        print("Sentry must report this.")
        SENTRY_CLIENT.captureException()

POOL_SIZE = multiprocessing.cpu_count()
pool = Pool(POOL_SIZE)
data = [0, 1, 2, 3, 4, 5]
pool.map(process_data, data)
pool.close()
pool.terminate()
Upvotes: 2
Reputation: 781
I finally got to the solution after some reading. Sentry works on an async, event-based model, and killing a process right after triggering Sentry does not guarantee the exception has reached the servers. Hence we need to add a delay (10 s) before the process exits in case of an exception, to ensure Sentry does its job.
def process_data(data):
    from configurations import SENTRY_CLIENT
    try:
        s = data / 0
    except ZeroDivisionError:
        print "Sentry must report this."
        import time
        SENTRY_CLIENT.captureException()
        time.sleep(10)

import multiprocessing
from billiard import Pool

POOL_SIZE = multiprocessing.cpu_count()
pool = Pool(POOL_SIZE)
data = [0, 1, 2, 3, 4, 5]
pool.map(process_data, data)
pool.close()
pool.terminate()
Upvotes: 4