Reputation: 8354
I have an application running on Cloud Run, built with FastAPI and SQLAlchemy. The database is Postgres, managed by Cloud SQL. At some point, the database alerts that the maximum number of connections has been exceeded. My suspicion is that I'm using a connection pool of size 5, and when a production container is eventually terminated due to inactivity, its pooled connections are never closed on the database side. This repeats until all available connections are consumed.
I understand that Cloud Run is ephemeral: a container can be terminated due to inactivity, so it doesn't seem sensible to hold a connection pool without a stateful layer in front of the database to own those connections. What seems more correct to me would be to put PgBouncer or Pgpool in front of the database. I even looked for a managed Google service for this, but didn't find one.
In summary, what would be the correct way to manage this connection pool with the database in an infrastructure using Cloud Run and Cloud SQL?
Upvotes: 2
Views: 806
Reputation: 462
Sharing this as a community wiki for the benefit of others
As mentioned by @John Hanley:
The container instance will receive a signal that it is being terminated. Close the open connections upon receipt of SIGTERM. See cloud.google.com/run/docs/samples/cloudrun-sigterm-handler
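A minimal sketch of that handler in Python: register a SIGTERM handler at startup that disposes of the connection pool before the instance exits. The `FakeEngine` class below is a hypothetical stand-in for a SQLAlchemy engine (in a real app you would use `create_engine(...)` and its `dispose()` method, which closes all pooled connections); the names here are illustrative, not from the original post.

```python
import signal
import sys

# Hypothetical stand-in for a SQLAlchemy engine; in the real app this would be
# e.g.:  engine = create_engine(DATABASE_URL, pool_size=5)
class FakeEngine:
    def __init__(self):
        self.disposed = False

    def dispose(self):
        # SQLAlchemy's Engine.dispose() closes all checked-in pooled connections
        self.disposed = True

engine = FakeEngine()

def handle_sigterm(signum, frame):
    # Cloud Run sends SIGTERM shortly before terminating the instance;
    # return pooled connections to Postgres before exiting.
    engine.dispose()
    sys.exit(0)

# Register the handler when the application starts
signal.signal(signal.SIGTERM, handle_sigterm)
```

This limits leaked connections from graceful shutdowns, but note that instances killed without a SIGTERM (or that exceed the grace period) can still leave connections behind until Postgres times them out.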
Upvotes: 3