Reputation: 10003
I have a Python script written with Flask which requires some preparation work (connecting to databases, acquiring some other resources, etc.) before it can actually accept requests.
I was using it under Apache HTTPD with wsgi. Apache config:
WSGIDaemonProcess test user=<uid> group=<gid> threads=1 processes=4
WSGIScriptAlias /flask/test <path>/flask/test/data.wsgi process-group=test
And it was working fine: Apache would start 4 completely separate processes, each with its own database connection.
I am now trying to switch to uwsgi + nginx. nginx config:
location /flask/test/ {
    include uwsgi_params;
    uwsgi_pass unix:/tmp/uwsgi.sock;
}
uwsgi:
uwsgi -s /tmp/uwsgi.sock --mount /flask/test=test.py --callable app --manage-script-name --processes=4 --master
The simplified script test.py:
from flask import Flask, Response

app = Flask(__name__)

def do_some_preparation():
    print("Prepared!")

@app.route("/test")
def get_test():
    return Response("test")

do_some_preparation()

if __name__ == "__main__":
    app.run()
What I would expect is to see "Prepared!" 4 times in the output. However, uwsgi does not do that. Here is the output:
Python main interpreter initialized at 0x71a7b0
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 363800 bytes (355 KB) for 4 cores
*** Operational MODE: preforking ***
mounting test.py on /flask/test
Prepared! <======================================
WSGI app 0 (mountpoint='/flask/test') ready in 0 seconds ...
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 1212)
spawned uWSGI worker 1 (pid: 1216, cores: 1)
spawned uWSGI worker 2 (pid: 1217, cores: 1)
spawned uWSGI worker 3 (pid: 1218, cores: 1)
spawned uWSGI worker 4 (pid: 1219, cores: 1)
So, in this simplified example, uwsgi spawned 4 workers but executed do_some_preparation() only once. In the real application, several database connections are opened, and apparently those are shared by these 4 processes, which causes issues with concurrent requests.
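To illustrate what I suspect is happening (a standalone sketch, not the real application): a socket opened before fork() is inherited by every child process, so all workers end up sharing the same underlying connection:

import os
import socket

# Stand-in for a database connection opened before the workers are forked.
conn, _ = socket.socketpair()
print("parent fd:", conn.fileno())

pid = os.fork()
if pid == 0:
    # The child inherits the very same descriptor and underlying socket.
    print("child fd: ", conn.fileno())
    os._exit(0)
os.waitpid(pid, 0)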
Is there a way to tell uwsgi to spawn several completely separate processes?
EDIT: I could, of course, get it working with a workaround like:
from flask import Flask, Response

app = Flask(__name__)
all_prepared = False

def do_some_preparation():
    global all_prepared
    all_prepared = True
    print("Prepared!")

@app.route("/test")
def get_test():
    if not all_prepared:
        do_some_preparation()
    return Response("test")

if __name__ == "__main__":
    app.run()
But then I will have to place this "all_prepared" check into every route, which does not seem like a good solution.
Upvotes: 2
Views: 6698
Reputation: 12933
By default, uWSGI does preforking: your app is loaded once and then forked.
If you want to load the app once per worker, add --lazy-apps to the uWSGI options.
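For example, taking the command line from the question, adding the flag would look like this (everything else unchanged):

uwsgi -s /tmp/uwsgi.sock --mount /flask/test=test.py --callable app --manage-script-name --processes=4 --master --lazy-apps

With that, each worker imports test.py itself, so do_some_preparation() should run once per worker.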
By the way, in both cases you are doing true multiprocessing :)
Upvotes: 6
Reputation: 10003
It seems I have found the answer myself: my code should be redesigned as:
@app.before_first_request
def do_some_preparation():
    ...
Then Flask will take care of running the do_some_preparation() function for each worker separately, allowing each worker to have its own database connection (or other concurrency-intolerant resource).
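Applied to the simplified test.py from the question, that looks roughly like this (the real preparation work, e.g. opening database connections, would go where the print is):

from flask import Flask, Response

app = Flask(__name__)

@app.before_first_request
def do_some_preparation():
    # Runs once per worker process, on the first request that worker handles.
    print("Prepared!")

@app.route("/test")
def get_test():
    return Response("test")

if __name__ == "__main__":
    app.run()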
Upvotes: 3