wspeirs

Reputation: 1413

Single apscheduler instance in Flask application

Setup:

I'm trying to prevent apscheduler from running the same job multiple times by preventing multiple instances of apscheduler from starting. Currently I'm using the following code to ensure the scheduler is only started once:

    if 'SCHEDULER' not in app.config or app.config['SCHEDULER'] is None:
        logger.info("Configuring scheduler")
        app.config['SCHEDULER'] = scheduler.configure()

However, when I look at my logs, I see the scheduler being started twice:

[07:07:56.796001 pid 24778 INFO] main.py 57:Configuring scheduler
[07:07:56.807977 pid 24778 INFO] base.py 132:Scheduler started
[07:07:56.812253 pid 24778 DEBUG] base.py 795:Looking for jobs to run
[07:07:56.818019 pid 24778 DEBUG] base.py 840:Next wakeup is due at-10-14 11:30:00+00:00 (in 1323.187678 seconds)
[07:07:57.919869 pid 24777 INFO] main.py 57:Configuring scheduler
[07:07:57.930654 pid 24777 INFO] base.py 132:Scheduler started
[07:07:57.935212 pid 24777 DEBUG] base.py 795:Looking for jobs to run
[07:07:57.939795 pid 24777 DEBUG] base.py 840:Next wakeup is due at-10-14 11:30:00+00:00 (in 1322.064753 seconds)

As can be seen from the pids, two separate processes are being started somewhere/somehow. How can I prevent this? Where is this configured in httpd?

Even if I did want two processes running, I could use flock to prevent apscheduler from starting twice. However, this won't work because the process that does NOT start apscheduler won't have app.config['SCHEDULER'] set, and so won't be able to add/remove jobs.

What is the best way to configure/setup a Flask web app with multiple processes that can add/remove jobs, and yet prevent the scheduler from running the job multiple times?

Upvotes: 5

Views: 1791

Answers (1)

wspeirs

Reputation: 1413

I finally settled on using a file-based lock to ensure that the task doesn't run twice:

    from fcntl import flock, LOCK_EX, LOCK_NB, LOCK_UN
    from time import sleep

    def get_lock(name):
        fd = open('/tmp/' + name, 'w')

        try:
            flock(fd, LOCK_EX | LOCK_NB)  # acquire an exclusive, non-blocking lock
            return fd
        except IOError:
            logger.warning('Could not get the lock for ' + str(name))
            fd.close()
            return None


    def release_lock(fd):
        sleep(2)  # hold the lock a bit longer in the hopes that it blocks the other proc
        flock(fd, LOCK_UN)
        fd.close()

It's a bit of a hack, but seems to be working...
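For completeness, here is a self-contained sketch of how a job can guard itself with these helpers, so that whichever process fires the job first wins the lock and the other process skips the run. The job name `my_job` and its body `do_work` are placeholders, not part of the original code:

    from fcntl import flock, LOCK_EX, LOCK_NB, LOCK_UN
    import logging

    logger = logging.getLogger(__name__)

    def get_lock(name):
        fd = open('/tmp/' + name, 'w')
        try:
            flock(fd, LOCK_EX | LOCK_NB)  # exclusive, non-blocking lock
            return fd
        except IOError:
            logger.warning('Could not get the lock for %s', name)
            fd.close()
            return None

    def release_lock(fd):
        flock(fd, LOCK_UN)
        fd.close()

    def my_job():
        # hypothetical scheduled job: only the process that wins the lock does the work
        fd = get_lock('my_job.lock')
        if fd is None:
            return  # another process is already running this job
        try:
            do_work()  # placeholder for the real job body
        finally:
            release_lock(fd)

Each process can then register `my_job` with its own scheduler instance; the flock call makes sure only one of them actually executes the body at a time.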

Upvotes: 5
