mabn

Reputation: 2523

distributed pool with limited size

There's a system that accesses various not-so-efficient services. The services are called as part of processing a message. Because of this inefficiency, the system is limited to 1 message consumer per node, to avoid overloading a particular service "A". Processed messages vary, and it can happen that multiple messages are processed at the same time, all requiring a call to service "A" - hence the limit. Let's say service "A" can handle 3 concurrent connections and the system has 3 nodes, so the highest allowed number of consumers per node is 1.

Other services have their own capacity limits as well, ranging from the aforementioned 3 up to basically unlimited.

The question is: what's the best way to introduce such limits? If there were a single node, it would be easy to just introduce a pool of service clients. Sure, it would block the message consumer until a client becomes available, but one could live with that. However, it also means pools of size 1 per node (because all 3 nodes could start calling service "A"). For multiple nodes, some kind of distributed client pool would be required. Is there anything like that?
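For illustration, here's roughly what I mean by a single-node pool of service clients: a fixed number of clients handed out via a blocking queue, so a consumer simply waits until one is free (the ServiceAClient type and the pool size are just placeholders):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Single-node only: a fixed-size pool of clients for service "A".
// ServiceAClient is a stand-in for whatever client type is actually used.
public class ServiceAClientPool {

    public static class ServiceAClient { /* placeholder */ }

    private final BlockingQueue<ServiceAClient> clients;

    public ServiceAClientPool(int size) {
        this.clients = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            clients.add(new ServiceAClient());
        }
    }

    // Blocks the message consumer until a client becomes available.
    public <T> T withClient(ClientCall<T> call) throws InterruptedException {
        ServiceAClient client = clients.take();
        try {
            return call.apply(client);
        } finally {
            clients.put(client);
        }
    }

    public interface ClientCall<T> {
        T apply(ServiceAClient client);
    }
}
```

This works per node, but with 3 nodes the pool size per node has to drop to 1, which is exactly the problem.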

(I know that if the processing of a single message were split into smaller messages, 1 per service call, this could be handled with e.g. a JMS queue, but that's not doable, e.g. due to the transactional nature of message processing.)

Upvotes: 1

Views: 644

Answers (1)

Sybeus

Reputation: 1189

I see two possible solutions. Both take a middleware approach to the problem: the simple one solves your specific problem, while the more advanced one requires a greater investment in exchange for greater flexibility and additional benefits.

The Simple Proxy Solution

Create your own middleware proxy in front of the not-so-efficient service. The proxy maintains the limited connection pool to the back-end service and simply blocks or rejects (instead of queueing) when the outbound connections to the back-end service are saturated. Otherwise it just forwards the request from the inbound system node to the back-end service and returns the service's response to the system node. This way the back-end service is never overloaded, and it allows for the synchronous communication that your nodes require because of their transactional nature.
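A rough sketch of such a proxy in plain Java (not a drop-in implementation - the port, the back-end URL and the limit of 3 are placeholders taken from the question), using a Semaphore to cap concurrent forwards and rejecting anything above the cap with a 503:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

// Minimal HTTP proxy in front of service "A": at most 3 requests are
// forwarded concurrently; anything beyond that is rejected with 503.
public class ServiceAProxy {
    private static final Semaphore PERMITS = new Semaphore(3);
    private static final HttpClient CLIENT = HttpClient.newHttpClient();
    private static final String BACKEND = "http://service-a.internal:8080";

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(9090), 0);
        server.setExecutor(Executors.newCachedThreadPool());
        server.createContext("/", exchange -> {
            if (!PERMITS.tryAcquire()) {              // reject instead of queueing
                exchange.sendResponseHeaders(503, -1);
                exchange.close();
                return;
            }
            try {
                // Forward the inbound request to the back-end service.
                HttpRequest request = HttpRequest.newBuilder(
                        URI.create(BACKEND + exchange.getRequestURI()))
                    .method(exchange.getRequestMethod(),
                            HttpRequest.BodyPublishers.ofByteArray(
                                exchange.getRequestBody().readAllBytes()))
                    .build();
                HttpResponse<byte[]> response =
                    CLIENT.send(request, HttpResponse.BodyHandlers.ofByteArray());
                // Relay the back-end's response to the calling node.
                exchange.sendResponseHeaders(response.statusCode(), response.body().length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(response.body());
                }
            } catch (Exception e) {
                exchange.sendResponseHeaders(502, -1);
            } finally {
                PERMITS.release();
                exchange.close();
            }
        });
        server.start();
    }
}
```

Swap tryAcquire() for acquire() if you would rather block callers until a connection frees up instead of rejecting them.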

The System Architect Solution

Use a full-featured ESB (Enterprise Service Bus), one that lets you set limits on concurrent connections to specific endpoints and supports both asynchronous and synchronous message processing. The ESB then becomes your environment-wide traffic controller, which can be configured to block or reject messages when a not-so-efficient endpoint becomes saturated. For additional benefits, look for an ESB that offers quality-of-service configuration to prevent starvation of system nodes when using limited resources, or that automatically retries connection attempts to flaky endpoints.
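Purely as an illustration (assuming Apache Camel as the integration layer - any comparable ESB has its own equivalent), a route like the following uses the Throttler EIP to cap the flow of messages towards the back-end; the endpoint URIs are made up:

```java
import org.apache.camel.builder.RouteBuilder;

// Illustrative only: throttle the flow of messages from a request queue
// to service "A" so the endpoint never sees more than a trickle of
// requests at a time. Endpoint URIs are placeholders.
public class ServiceARoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("jms:queue:serviceA.requests")
            .throttle(3)                         // Throttler EIP: limit messages flowing through
            .to("http://service-a.internal/api") // call the not-so-efficient back-end
            .to("jms:queue:serviceA.replies");   // hand the response back to the caller
    }
}
```

The exact knob (message rate vs. concurrent connections) and the QoS/retry options vary by product, so check what your ESB of choice actually supports.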

Upvotes: 3
