Reputation: 15474
I've written an "app" that I would like to distribute to clients as a service. Most of the "app" logic lives on the database side (PostgreSQL), and its speed/availability depends mostly on the database server. The app can be exposed as a REST API (node.js) that mostly just queries the database for some data and returns it in JSON format.
So there should be a separate database instance (server) per client to keep the client apps "independent". When a customer purchases a plan, he will "receive" a database server (not exposed to him directly) from the cloud (Amazon RDS, Heroku PG or other).
The problem is how to expose the REST API to each customer. Normally there would be a separate web server per customer, each connecting to a single database. But this means I would have to manage N databases and N web servers, one of each per client, and additionally scale each web server (horizontally) for that customer's needs (to handle more requests per second).
So my idea is to create a single scalable web server (like a single Heroku app) that will handle requests from all clients. So the flow would look like this:
The problem is that database connections should be pooled on the web server instances (to make things faster). With 1000 clients, a single web server instance could end up with x*1000 database connections open, where x is the number of parallel requests per customer.
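One way to bound that number is to keep at most a fixed count of per-tenant pools open per instance and evict the least recently used one. Here is a minimal sketch of that idea; the names (`createPool`, `MAX_POOLS`) are illustrative, and in a real app the pool would come from a driver such as node-postgres (`new pg.Pool({ max: 5, ... })`):

```javascript
const MAX_POOLS = 3; // e.g. 100 in production

// tenantId -> pool; Map iteration order gives us least-recently-used first
const pools = new Map();

function createPool(tenantId) {
  // Stand-in for e.g. new pg.Pool({ ...tenantDbConfig(tenantId), max: 5 })
  return { tenantId, end: () => { /* close idle connections here */ } };
}

function getPool(tenantId) {
  let pool = pools.get(tenantId);
  if (pool) {
    pools.delete(tenantId); // re-insert below to mark as most recently used
  } else {
    pool = createPool(tenantId);
    if (pools.size >= MAX_POOLS) {
      // Evict the least recently used tenant's pool and close it
      const [oldestId, oldestPool] = pools.entries().next().value;
      pools.delete(oldestId);
      oldestPool.end();
    }
  }
  pools.set(tenantId, pool);
  return pool;
}
```

With a small per-pool `max`, the worst case per instance becomes MAX_POOLS * max connections instead of growing with the total number of customers.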
On the other hand, if I scale the web server, a single customer may hit, for example, i1, then i2, then i3, then i4 (where iX is the web server instance selected by the load balancer). That is inefficient because it "opens" a connection pool to the same database on 4 different web server instances. If those 4 subsequent requests all hit i1, they could take advantage of the connection pool "opened" during the first request (and the later requests would reuse already-open connections).
I have two questions:
Is this architecture a good idea? Are there other standard approaches/services to achieve my goal?
Is it possible to configure the load balancer to be smart enough to forward requests from the same customer to the same web server instance (to take advantage of the database connection pool on that instance), while still keeping all requests distributed across instances as evenly as possible? (For example, if there were 10 customers c1, c2, ..., c10 and 4 web server instances w1, w2, w3, w4, and each client made x requests, it could forward c1|c2 to w1, c3|c4 to w2, c5|c6|c7 to w3 and c8|c9|c10 to w4.) Which load balancer could do this (I know HAProxy, NGINX and Amazon's Elastic Load Balancer)?
Upvotes: 0
Views: 1792
Reputation: 6499
nginx has a load-balancing option that sends a client to a web server based on its IP: http://nginx.org/en/docs/http/load_balancing.html#nginx_load_balancing_with_ip_hash, so it looks like what you want for 2).
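A minimal sketch of that setup (upstream name, addresses and ports are placeholders):

```nginx
upstream app_servers {
    ip_hash;               # same client IP always goes to the same backend
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
    }
}
```

Keep in mind `ip_hash` keys on the client IP, not a customer id; if multiple customers share an IP (or one customer has many), nginx's generic `hash` directive can key on a header or query parameter instead.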
And there are several discussions about database-per-client on SO:
What are the advantages of using a single database for EACH client?
https://serverfault.com/questions/107731/one-database-vs-multiple-databases
Upvotes: 1