Reputation: 1015
I created a Flask WSGI application that uses gunicorn as the WSGI server and PostgreSQL as the database, accessed through the Flask-SQLAlchemy extension. It's all hosted on Heroku.
gunicorn configuration
Heroku PostgreSQL configuration
For everything else, the default configuration is used.
I'm getting this error: `sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL: too many connections for role <id>`. Obviously, I'm exceeding the allowed number of DB connections.
I tried these things to fix it:

- Setting `poolclass` to `NullPool`;
- Setting `pool_recycle` to 2 (the number of connections stays the same even after more than 2 seconds);
- Calling `session.close()` together with `engine.dispose()`;
- Setting `max_overflow = 0` with SQLAlchemy `pool_size = 10` (then I get this error: `sqlalchemy.exc.TimeoutError: QueuePool limit of size 10 overflow 0 reached, connection timed out, timeout 30`).

None of this works. I'm still getting the error even with the minimal gunicorn configuration (1 worker with 18 connections). I'm really starting to lose track of what is going on.
I thought it worked like this: each worker has its own instance of the engine, and each engine has its own pool size. So, with 2 workers and the default engine config (pool size of 5), we'd have 2 * 5 = 10 maximum connections to the DB. But it seems that's not how it actually works.
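One detail worth checking in that estimate: with SQLAlchemy's default `QueuePool`, the per-engine limit is not just `pool_size`; the pool may also open up to `max_overflow` extra connections (the documented defaults are `pool_size=5`, `max_overflow=10`). A quick sanity check of the arithmetic:

```python
# Rough upper bound on DB connections with QueuePool defaults
# (pool_size=5 and max_overflow=10 are SQLAlchemy's documented defaults).
workers = 2          # gunicorn worker processes, each with its own engine
pool_size = 5
max_overflow = 10

max_connections = workers * (pool_size + max_overflow)
print(max_connections)  # 30, not the 10 that the 2 * 5 estimate suggests
```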
Questions

How does SQLAlchemy pooling work with gevent workers? That is, how can I calculate the maximum number of DB connections? Sorry for so many questions, but this is really frustrating me.
Upvotes: 15
Views: 3476
Reputation: 637
I faced the same problem, but when I set `preload_app=False` in the gunicorn config file, it was fixed.
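For context, a minimal gunicorn config file along those lines might look like this (gunicorn configs are plain Python; the worker counts here just echo the question's setup and are not prescriptive):

```python
# gunicorn.conf.py (sketch): with preload_app = False, the app is loaded
# after forking, so each worker builds its own engine/pool instead of
# sharing one inherited from the master process.
preload_app = False
workers = 1
worker_class = "gevent"
worker_connections = 18
```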
Upvotes: 1
Reputation: 1269
I've been struggling with the same problem while building this project, and the only way I was eventually able to address it was by caching the `MockConnection`/`database_engine` objects returned by SQLAlchemy's `create_engine` method in a global variable, and reusing the same `MockConnection` object if one has already been created.
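A minimal sketch of that caching pattern is below. `_create_engine` is a stand-in for `sqlalchemy.create_engine` so the example runs without a database, and the function names and URL are made up for illustration:

```python
# db.py (sketch): cache the engine in a module-level global so every
# request in this process reuses one engine (and thus one pool),
# instead of creating a new engine per request.
_engine = None

def _create_engine(url):
    # Stand-in for sqlalchemy.create_engine(url); returns a new object per call.
    return object()

def get_engine(url="postgresql://localhost/mydb"):
    global _engine
    if _engine is None:              # first call in this process: build the engine
        _engine = _create_engine(url)
    return _engine                   # later calls: same engine, same pool
```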
Upvotes: 1