Reputation: 5907
I can't seem to get my Flask app to close or reuse DB connections. I'm using PostgreSQL 9.1.3 and
Flask==0.8
Flask-SQLAlchemy==0.16
psycopg2==2.4.5
As my test suite runs, the number of open connections climbs until it hits 20 (the max_connections setting in postgresql.conf), then I see:
OperationalError: (OperationalError) FATAL: sorry, too many clients already
None None
I've reduced the code to the point where it's just calling create_all and drop_all (but not issuing any SQL, as there are no models).
I see connections being checked in and out in the logs:
DEBUG:sqlalchemy.pool.QueuePool:Connection <connection object at 0x101c1dff0; dsn: 'dbname=cx_test host=localhost', closed: 0> checked out from pool
DEBUG:sqlalchemy.pool.QueuePool:Connection <connection object at 0x101c1dff0; dsn: 'dbname=cx_test host=localhost', closed: 0> being returned to pool
WARNING:root:impl <-------- That's the test running
DEBUG:sqlalchemy.pool.QueuePool:Connection <connection object at 0x101c1dff0; dsn: 'dbname=cx_test host=localhost', closed: 0> checked out from pool
DEBUG:sqlalchemy.pool.QueuePool:Connection <connection object at 0x101c1dff0; dsn: 'dbname=cx_test host=localhost', closed: 0> being returned to pool
For each test run the address of the connection (the "connection object at xyz" part) is different. I suspect this has something to do with the problem, but I'm not sure how to investigate further.
The code below reproduces the problem in a new venv:
from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy
from unittest import TestCase
import logging

logging.basicConfig(level=logging.DEBUG)
logging.getLogger('sqlalchemy.pool').setLevel(logging.DEBUG)
logging.getLogger('sqlalchemy.engine').setLevel(logging.DEBUG)
logging.getLogger('sqlalchemy.dialects').setLevel(logging.DEBUG)
logging.getLogger('sqlalchemy.orm').setLevel(logging.DEBUG)

db = SQLAlchemy()

def create_app(config=None):
    app = Flask(__name__)
    app.config.from_object(config)
    db.init_app(app)
    return app
class AppTestCase(TestCase):
    SQLALCHEMY_DATABASE_URI = "postgresql://localhost/cx_test"
    TESTING = True

    def create_app(self):
        return create_app(self)

    def setUp(self):
        self.app = self.create_app()
        self.client = self.app.test_client()
        self._ctx = self.app.test_request_context()
        self._ctx.push()
        db.create_all()

    def tearDown(self):
        db.session.remove()
        db.drop_all()
        self._ctx.pop()
class TestModel(AppTestCase):
    def impl(self):
        logging.warn("impl")

    def test_01(self):
        self.impl()

    def test_02(self):
        self.impl()

    def test_03(self):
        self.impl()

    def test_04(self):
        self.impl()

    def test_05(self):
        self.impl()

    def test_06(self):
        self.impl()

    def test_07(self):
        self.impl()

    def test_08(self):
        self.impl()

    def test_09(self):
        self.impl()

    def test_10(self):
        self.impl()

    def test_11(self):
        self.impl()

    def test_12(self):
        self.impl()

    def test_13(self):
        self.impl()

    def test_14(self):
        self.impl()

    def test_15(self):
        self.impl()

    def test_16(self):
        self.impl()

    def test_17(self):
        self.impl()

    def test_18(self):
        self.impl()

    def test_19(self):
        self.impl()

if __name__ == "__main__":
    import unittest
    unittest.main()
This is the first time I've used app factories in Flask, and I copied this code partly from the Flask-SQLAlchemy docs. Elsewhere, those docs mention that using a db in the wrong context will cause connections to leak - maybe I am doing the init incorrectly?
Upvotes: 22
Views: 9982
Reputation: 17246
EDIT: SQLALCHEMY_COMMIT_ON_TEARDOWN was deprecated in Flask-SQLAlchemy version 2.4.3. You can see the change note here, where they recommend calling .commit() on your own.
What I do is register my own app.after_request handler, which calls .commit() if the response status code is < 400. That requires structuring your application so that HTTP transactions with response status codes < 400 are exactly the ones that should be committed to the database, but I think it is a good design principle.
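A minimal, framework-free sketch of that hook's decision logic (the FakeSession class and commit_on_success name are assumptions standing in for db.session and the real app.after_request handler, which would also have to return the response object):

```python
class FakeSession:
    """Stand-in for db.session so the hook logic can run without a database."""
    def __init__(self):
        self.actions = []

    def commit(self):
        self.actions.append("commit")

    def rollback(self):
        self.actions.append("rollback")

session = FakeSession()

def commit_on_success(status_code):
    # Mirrors the after_request hook: commit only when the response
    # indicates success; roll back otherwise so a failed request
    # leaves no half-finished writes behind.
    if status_code < 400:
        session.commit()
    else:
        session.rollback()

commit_on_success(200)  # successful request -> commit
commit_on_success(500)  # server error -> rollback
```

The rollback branch is the important half: without it, a failed request's pending changes could leak into the next request that happens to commit.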
---- OLD OUTDATED ANSWER BELOW ----
In the most recent versions of Flask-SQLAlchemy, session.remove() is automatically called in app.after_request.
Also, see the SQLALCHEMY_COMMIT_ON_TEARDOWN setting here:
https://pythonhosted.org/Flask-SQLAlchemy/config.html?highlight=sqlalchemy_commit_on_teardown
That will automatically commit the transaction as well.
Upvotes: 2
Reputation: 2454
After reading the SQLAlchemy docs and some fiddling with the db instance, I finally got the solution: add db.get_engine(self.app).dispose() in tearDown() so that it looks like:

def tearDown(self):
    db.session.remove()
    db.drop_all()
    db.get_engine(self.app).dispose()
    self._ctx.pop()
Upvotes: 11
Reputation: 1085
Since the question was asked about a year ago, I figure the OP must've resolved his issues. But for whoever wanders here (like me) trying to figure out what's going on, here's my best explanation:
As van said, the problem is indeed with the test case calling setUp and tearDown for each test. The connection is not exactly leaking from SQLAlchemy; rather, because each test has its own setUp, multiple instances of the app are created, and each app has its own database connection pool, which isn't reused or recycled when the test finishes.
In other words, each connection is checked out and returned to the pool correctly, but it then lives on as an idle connection, held for future transactions within the same app (that is the point of connection pooling).
In the test case above, around 20 connection pools (each holding an idle connection because of the create_all/drop_all) are created, exhausting the postgres connection limit.
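The arithmetic can be sketched without Flask or SQLAlchemy at all. The toy Pool class below is an assumption standing in for SQLAlchemy's QueuePool, but the mechanism is the same: one pool per app instance, one idle connection kept alive per pool:

```python
class Pool:
    """Toy connection pool: checked-in connections stay open and idle for reuse."""
    def __init__(self):
        self.idle = []

    def connect(self):
        return object()          # stands in for a real server-side connection

    def checkin(self, conn):
        self.idle.append(conn)   # "returned to pool" -- returned, not closed

    def dispose(self):
        self.idle.clear()        # actually closes everything in the pool

pools = []
for _ in range(20):              # one fresh app (hence one fresh pool) per test
    pool = Pool()
    conn = pool.connect()        # create_all/drop_all checks a connection out...
    pool.checkin(conn)           # ...and checks it back in, still open
    pools.append(pool)

idle_total = sum(len(p.idle) for p in pools)
print(idle_total)                # 20 idle connections -> max_connections reached
```

This also shows why the dispose() answer works: disposing each pool in tearDown closes its idle connection before the next test creates a new pool.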
Upvotes: 10
Reputation: 76992
You know that setUp and tearDown are called before and after every test method. From your code it looks like you need them in order to ensure an empty database.
However, there are also setUpClass and tearDownClass, which are called once per test class.
I believe you can split the code you currently have, moving the db-connection-related part to the class level while keeping the test-method-related part where it needs to be.
Upvotes: 0