Reputation: 461
When I query a Flask-SQLAlchemy model in a Thread, it seems like the database connection is never closed. Here is a test case:
from threading import Thread
from sqlalchemy.pool import NullPool
from flask_sqlalchemy import SQLAlchemy
from flask import Flask
db = SQLAlchemy()
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql:///testdb'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {
'poolclass': NullPool,
}
db.app = app
db.init_app(app)
class TestModel(db.Model):
__tablename__ = "test_table"
id = db.Column(db.Integer, autoincrement=True, primary_key=True)
@app.route('/test-connection')
def test_connection():
def run_test():
models = TestModel.query.all()
print(models)
thread = Thread(target=run_test)
thread.start()
return "OK"
app.run(debug=True, host='0.0.0.0')
When I run my route, it permanently leaves a connection in the idle in transaction state in pg_stat_activity:
testdb=# select query, state from pg_stat_activity where query like '%test_table%' and query not like '%pg_stat_activity%';
query | state
----------------------------------------+---------------------
SELECT test_table.id AS test_table_id +| idle in transaction
FROM test_table |
(1 row)
That row doesn't show up if I call the run_test
function synchronously instead of in the Thread. In production, this is causing my application to run out of database connections and crash. How can I close my Flask-SQLAlchemy database connection after using it in a thread?
I'm using Python 3.8.6, SQLAlchemy 1.3.18, and Flask-SQLAlchemy 2.4.4.
Upvotes: 9
Views: 12527
Reputation: 13301
Use a with statement. Here is a test:
def test():
    # db.session() returns the underlying session; with SQLAlchemy 1.4+ it can be
    # used as a context manager, and it is closed when the block exits.
    with db.session() as dbss:
        qss = TestModel.query.session
        assert dbss == qss

    with db.session():
        TestModel.query.filter(...)...
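A minimal sketch of how this could look inside the threaded function from the question, assuming SQLAlchemy 1.4 or newer (where the Session supports the context manager protocol); db and TestModel are the objects defined in the question:

def run_test():
    # Closing the session when the with block exits releases its connection
    # (with NullPool, the connection is closed outright rather than returned to a pool).
    with db.session() as session:
        models = session.query(TestModel).all()
        print(models)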
Upvotes: 0
Reputation: 566
We faced a similar issue with open connections. The database in question was SQL Server, but I believe the same solution should work here.
We added a method to close the DB connection.
def cleanup(session):
"""
This method cleans up the session object and closes the connection pool using the dispose
method.
"""
session.close()
engine_container.dispose()
The engine_container is defined as:
engine_container = db.get_engine(app)
We called this method from the finally block after every request.
finally:
cleanup(db.session)
With this change, your code should look like:
from threading import Thread
from sqlalchemy.pool import NullPool
from flask_sqlalchemy import SQLAlchemy
from flask import Flask
db = SQLAlchemy()
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql:///testdb'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {
'poolclass': NullPool,
}
db.app = app
db.init_app(app)
engine_container = db.get_engine(app)
class TestModel(db.Model):
__tablename__ = "test_table"
id = db.Column(db.Integer, autoincrement=True, primary_key=True)
@app.route('/test-connection')
def test_connection():
def run_test():
try:
models = TestModel.query.all()
print(models)
except Exception as err:
raise err
finally:
cleanup(db.session)
thread = Thread(target=run_test)
thread.start()
return "OK"
def cleanup(session):
"""
This method cleans up the session object and also closes the connection pool using the dispose method.
"""
session.close()
engine_container.dispose()
app.run(debug=True, host='0.0.0.0')
Upvotes: 13
Reputation: 372
The issue you're experiencing has to do with the connection pooling that goes on behind the scenes. Good resources to read are the SQLAlchemy documentation on setting the pool recycle parameter and on the reset-on-return behavior.
SQLALCHEMY_ENGINE_OPTIONS = {
    'pool_size': 20,
    'pool_reset_on_return': 'commit',  # looks like postgres likes this more than 'rollback'
    'pool_timeout': 5,  # try a low value here, maybe
}
It also looks like there is some useful information here. It might be useful to create a custom function that essentially closes the session connection following a successful response.
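For example, here is a minimal sketch of that idea using Flask's after_request hook; the hook choice and function name are my own assumptions, not something spelled out above:

@app.after_request
def release_db_connection(response):
    # Close the scoped session so its checked-out connection is returned to the
    # pool (or closed outright under NullPool) once the response is ready.
    db.session.close()
    return response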
Unfortunately, I don't have much experience with Postgres myself, and had to tinker my way to the promised land with MySQL connection pooling. I imagine the answer will actually be found somewhere in the Postgres documentation, like it was for me in the MySQL documentation.
Upvotes: 3