Reputation: 2654
According to this article, the App Engine front end and the playground back end communicate through RPC calls. Several front-end instances and several playground back-end instances can be created to support scaling.
I am wondering which patterns (solutions) can be used to load balance the work between front-end requests and back-end instances while keeping the RPC mechanism.
One solution may be to use a single global work queue into which tasks are put with a 'Reply-To' header. This header would point to a per-front-end-instance queue where the responses are placed. Something like the following schema (from the RabbitMQ tutorial), with rpc_queue shared between the back-end instances:
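To make the idea concrete, here is a minimal sketch of that pattern using the Python pika client, in the spirit of the RabbitMQ RPC tutorial. The queue names, the payload, and the do_work helper are hypothetical placeholders, and the two halves would of course live in separate processes:

```python
import uuid
import pika

# --- Front-end side: publish a task and wait for the reply ----------------
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# Exclusive, server-named queue: one reply queue per front-end instance.
reply_queue = channel.queue_declare(queue='', exclusive=True).method.queue
corr_id = str(uuid.uuid4())

channel.basic_publish(
    exchange='',
    routing_key='rpc_queue',          # shared work queue, consumed by all back ends
    properties=pika.BasicProperties(
        reply_to=reply_queue,         # 'Reply-To': where the back end should answer
        correlation_id=corr_id,       # lets the front end match replies to requests
    ),
    body=b'code to compile/run')

# --- Back-end side: consume the shared queue, answer on reply_to ----------
def on_request(ch, method, props, body):
    result = do_work(body)            # hypothetical: run/compile the snippet
    ch.basic_publish(
        exchange='',
        routing_key=props.reply_to,
        properties=pika.BasicProperties(correlation_id=props.correlation_id),
        body=result)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)   # spread work fairly across back-end instances
channel.basic_consume(queue='rpc_queue', on_message_callback=on_request)
```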
I am not sure this would be a good approach, especially since the whole system fails if the shared queue goes offline (but how should this be handled?).
Thank you.
Upvotes: 6
Views: 846
Reputation: 2654
As an answer and a follow-up to the comments I received on the first post, I developed Indenter, a small proof of concept based on the proposed idea of a service discovery daemon (I use etcd instead of ZooKeeper for simplicity, however).
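For readers who just want the gist, here is a rough sketch of the registration/discovery idea with etcd, using the python-etcd3 client; the key prefix, addresses, and TTL are hypothetical and not necessarily what Indenter actually does:

```python
import etcd3

client = etcd3.client(host='localhost', port=2379)

# --- Back-end instance: register itself under a TTL lease -----------------
# If the instance dies and stops refreshing the lease, its key expires and
# it disappears from the pool automatically.
lease = client.lease(ttl=10)
client.put('/services/playground/backend-1', '10.0.0.5:8080', lease=lease)
# ... periodically, while the back end is alive:
lease.refresh()

# --- Front-end instance: discover the currently registered back ends ------
backends = [value.decode() for value, _meta in
            client.get_prefix('/services/playground/')]
# Pick one (round robin, random, ...) and issue the RPC call directly to it.
```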
I wrote an article about it and released the code in case someone is interested one day:
Upvotes: 1