Reputation: 157
I have a Django application that I'm running on Docker. I'm trying to launch an APScheduler scheduler when I run the Docker container.
I created a scheduler and simply added a job to it, called test1, which sends an email to my address.
This is the Python script that is launched when I run the container:
from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.schedulers.background import BackgroundScheduler
#scheduler = BlockingScheduler()
scheduler = BackgroundScheduler()
def test1():
    ...  # (code to send email)
scheduler.add_job(test1, 'interval', seconds = 20)
scheduler.start()
I tried both kinds of schedulers and compared the results.
Since the emails were only sent in one of the two cases, I guess the problem is neither Django- nor Docker-related, but purely about APScheduler. I did my research, but I couldn't find why the BackgroundScheduler didn't work: in the tutorials I read, the developer sets up the scheduler the same way I did.
Any help would be much appreciated, thanks!
UPDATE 1
I tried the two following things; both made the BackgroundScheduler behave like a BlockingScheduler (which is not what I want):
1) Setting the daemon option to False when initialising the scheduler instance:
scheduler = BackgroundScheduler(daemon = False)
2) "Trying to keep the main thread alive", as explained in these:
how-do-i-schedule-an-interval-job-with-apscheduler
apscheduler-inside-a-class-object
I added this right after scheduler.start():
import time

while True:
    time.sleep(1)
scheduler.shutdown()
UPDATE 2
When I try to set up a BackgroundScheduler in a single Python file (outside of any application context), it works very well:
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.schedulers.blocking import BlockingScheduler
def test1():
    print('issou')
scheduler = BackgroundScheduler()
scheduler.start()
scheduler.add_job(test1, 'interval', seconds=5)
print('yatangaki')
'yatangaki' is first printed, and then 'issou' every 5 seconds, so everything seems fine.
UPDATE 3
Now I've tried to start the scheduler on a Django app that I ran locally with python manage.py runserver, without using Docker.
It works perfectly: the emails are sent and I can access the main view of the application.
Note: the BackgroundScheduler is started by a function called start_test1. In this app, I call start_test1 in the top-level urls.py file. On the other app - the one that I run with Docker, and the one I want to use in the end - start_test1 is called in a Python script, which is itself triggered by a .sh file that I run via the Docker CMD command.
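To give an idea, the script triggered by the .sh file looks roughly like this (the file name and the import path below are placeholders, not my actual ones):
# start_scheduler.py - placeholder name for the script run by the .sh file (Docker CMD)
from myproject.scheduling import start_test1  # placeholder import path

start_test1()  # starts the BackgroundScheduler that runs the test1 job
# nothing else runs after this call, so the script reaches its end here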
Upvotes: 8
Views: 10852
Reputation: 451
I had a similar problem, and it got solved only by not blocking the worker:
import threading

from apscheduler.schedulers.background import BackgroundScheduler

# working:
def worker():
    phone_elm = ....
    thread = threading.Thread(target=work, args=(phone_elm,))
    thread.start()

# not working:
def worker():
    phone_elm = ....
    work(phone_elm)

scheduler2 = BackgroundScheduler(timezone="Asia/Kolkata")
# schedule scanning of the folder's running file
scheduler2.add_job(worker, 'interval', seconds=15, max_instances=5000)
scheduler2.start()
By "not working" I mean that after it had been triggered about 10 times, the scheduler stopped for no apparent reason, and it only started again once one of those 10 runs finished (work exited).
It is also part of Django, but it is clear that start is not exiting...
Upvotes: 0
Reputation: 157
It appears it was all about where to start the scheduler and where to add the job.
In what I did initially (launching the script from a .sh file), the BackgroundScheduler started, but the Python script immediately ended after being run, since it didn't have a blocking behaviour and the .sh file wasn't really part of the app (it's used by the Dockerfile, not by the app).
I ended up finding the solution here: execute-code-when-django-starts-once-only. There was no apps.py file inside my application, so I created one and followed the instructions in that thread.
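For reference, the apps.py I added looks roughly like this (the app, module and job names below are placeholders, adapt them to your project):
# myapp/apps.py - placeholder app name
from django.apps import AppConfig


class MyAppConfig(AppConfig):
    name = 'myapp'  # must match the app's package name

    def ready(self):
        # Imported here so Django is fully set up before the scheduler starts.
        from apscheduler.schedulers.background import BackgroundScheduler
        from .jobs import test1  # placeholder module holding the job function

        scheduler = BackgroundScheduler()
        scheduler.add_job(test1, 'interval', seconds=20)
        scheduler.start()
You also need to point Django at this config (via INSTALLED_APPS or default_app_config, depending on the Django version). Note that manage.py runserver with the auto-reloader can execute ready() more than once, so you may want to guard against starting two schedulers.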
It works fine now.
Upvotes: 2