Pragmatic

Reputation: 3127

Scalable Request Response pattern using Azure Service Bus

We are evaluating Azure Service Bus for a request/response pattern between our web server and app server. We are planning to have two queues:

Request Queue

Response Queue

The web server will push a message to the request queue and subscribe to the response queue. By comparing the MessageId with the CorrelationId, it can pick up the matching response and send it back to the browser.
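The correlation step described above can be sketched as follows. This is a minimal, hedged illustration: plain in-memory `queue.Queue` objects stand in for the two Service Bus queues (no Azure connection), and all names (`web_server_send`, `app_server_process`, etc.) are made up for the example.

```python
# Sketch of request/response correlation: the reply carries a CorrelationId
# equal to the request's MessageId, and the sender scans replies for a match.
# In-memory queues stand in for the Service Bus request/response queues.
import queue
import uuid

request_queue = queue.Queue()
response_queue = queue.Queue()

def web_server_send(payload):
    """Push a request; its MessageId is the key the reply must echo back."""
    message_id = str(uuid.uuid4())
    request_queue.put({"MessageId": message_id, "Body": payload})
    return message_id

def app_server_process():
    """Pop one request and reply with CorrelationId = request's MessageId."""
    request = request_queue.get()
    response_queue.put({
        "CorrelationId": request["MessageId"],
        "Body": request["Body"].upper(),   # placeholder "work"
    })

def web_server_receive(message_id):
    """Take replies until the one correlated to our request appears."""
    while True:
        response = response_queue.get()
        if response["CorrelationId"] == message_id:
            return response["Body"]
        response_queue.put(response)       # not ours; put it back

msg_id = web_server_send("hello")
app_server_process()
reply = web_server_receive(msg_id)
print(reply)  # prints "HELLO"
```

Note the weakness the question is about: with several web server instances sharing one response queue, an instance may pull a reply that belongs to a different instance, which is exactly the instance-affinity problem discussed below.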

But in the cloud, with elastic scaling, the number of web server (and app server) instances can increase or decrease. We are wondering whether this pattern will still work optimally.

To make this work, we would need one request queue and multiple topics (one per web server instance).

This has two downsides:

Whenever we add or remove a web server instance, we also have to create or delete its topic.

Every message will be pushed to all the topics, so every message is processed by all the web servers, which is inefficient.

Please share your thoughts.

Thanks In Advance

Upvotes: 1

Views: 1540

Answers (2)

Akash Thambiran

Reputation: 41

I know this is old, but thought I should comment to complete this thread.

I agree with Sean. In principle, do not design with instance affinity in mind. Any design should work irrespective of the number of instances and of which instance runs the code. Microsoft recommends the same when designing application architectures for the cloud.

In your case, I do not think you should plan to have one topic per instance. You should just put the request messages into one topic, with a subscription that lets your receiving app service process those request messages. When your receiving app service scales out, your design needs to allow multiple receivers (multiple instances) to read messages from the same subscription, which is described in the Competing Consumers pattern. https://learn.microsoft.com/en-us/azure/architecture/patterns/competing-consumers
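To make the competing-consumers idea concrete, here is a minimal sketch using threads and an in-memory queue in place of a Service Bus subscription (an assumption for illustration, not the Azure SDK): several workers share one queue, and each message is delivered to exactly one of them.

```python
# Competing consumers: N worker threads pull from one shared queue;
# each message is processed by exactly one worker, whichever is free.
import queue
import threading

work_queue = queue.Queue()
results = []
results_lock = threading.Lock()

def worker(worker_id):
    while True:
        item = work_queue.get()
        if item is None:               # sentinel: shut this worker down
            work_queue.task_done()
            return
        with results_lock:
            results.append((worker_id, item))
        work_queue.task_done()

# Scale out: three consumer "instances" competing on the same queue.
threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()

for message in range(10):
    work_queue.put(message)

work_queue.join()                      # wait until every message is handled
for _ in threads:
    work_queue.put(None)               # one sentinel per worker
for t in threads:
    t.join()

# Every message was handled exactly once, regardless of which
# instance happened to pick it up.
assert sorted(item for _, item in results) == list(range(10))
```

Adding or removing workers changes only throughput, not correctness, which is why no per-instance topic is needed on the request side.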

Please post what you have finally implemented.

Upvotes: 0

Sean Feldman

Reputation: 26057

When you scale out your endpoint, you don't want instance affinity. You want to rely on competing consumers and not care which instance of your endpoint processes a given message.

For example, if you receive a response and write it to a database, you most likely don't care which instance of the endpoint wrote the data. But if you have in-memory state, or any other information available only to the endpoint instance that originated the request, and processing the reply requires that information, then you have instance affinity and need to either remove it or use technology that lets you address it. For example, something like SignalR with a backplane to communicate a reply message to all your web endpoint instances.
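The backplane idea above can be sketched like this. It is a hedged, in-memory illustration (the `WebInstance` class and its method names are invented for the example, not a SignalR API): the reply is broadcast to every web instance, and only the instance holding the matching in-flight request acts on it.

```python
# Backplane-style reply routing: broadcast the reply to all web instances;
# only the instance that originated the request (and so holds the pending
# correlation id in memory) completes it, the others ignore the broadcast.
class WebInstance:
    def __init__(self, name):
        self.name = name
        self.pending = {}              # correlation_id -> completion callback

    def on_reply(self, correlation_id, body):
        callback = self.pending.pop(correlation_id, None)
        if callback is None:
            return False               # not our request; ignore
        callback(body)                 # deliver to the waiting browser request
        return True

instances = [WebInstance("web-1"), WebInstance("web-2")]
delivered = []
instances[1].pending["req-42"] = delivered.append  # web-2 owns this request

# The backplane broadcasts the reply to every instance.
handled_by = [inst.name for inst in instances
              if inst.on_reply("req-42", "response body")]

print(handled_by)  # prints ['web-2']
```

This works around instance affinity rather than removing it; as noted below, removing it entirely is the better option where possible.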

Note that ideally you should avoid instance affinity as much as you can.

Upvotes: 1
