Reputation: 18339
I am currently evaluating the use of a Service Bus and an Azure Function to trigger some work that needs to be done via a downstream API call. It is all pretty standard, except that I don't have a good handle on what happens when the downstream system is overloaded and/or returns throttling headers (i.e., max number of calls per minute, etc.). We don't seem to have any dynamic control over forced throttling of the queue trigger.
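Whatever approach ends up in place, the function needs a consistent way to turn those throttle headers into a delay. A minimal sketch, assuming the downstream API uses `Retry-After` / `X-RateLimit-Remaining` (the header names and the 60-second fallback window are assumptions; check what your downstream system actually returns):

```python
def throttle_delay_seconds(headers):
    """Return how long to back off based on response headers (0 if no hint)."""
    retry_after = headers.get("Retry-After")
    if retry_after is not None:
        # RFC 7231 also allows an HTTP-date here; only the seconds form is handled.
        return float(retry_after)
    remaining = headers.get("X-RateLimit-Remaining")
    if remaining is not None and int(remaining) == 0:
        return 60.0  # assumed window length when the quota is exhausted
    return 0.0
```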
I know that we can manually set the max concurrency, but that doesn't necessarily solve the issue: we have no control over the downstream system and need to consider that it could be offline or slow at any time.
Also, we can create the messages so they are scheduled to flow in at a certain rate, but again, the downstream system can still saturate or return rate limits.
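For completeness, pacing the scheduled messages reduces to assigning each one a scheduled enqueue time at send time; a sketch of the idea (how the times map onto the Service Bus message's scheduled-enqueue property is up to the sender):

```python
from datetime import datetime, timedelta, timezone

def paced_schedule(count, per_second, start=None):
    """Yield one scheduled-enqueue time per message so that at most
    `per_second` messages become visible in any given second."""
    start = start or datetime.now(timezone.utc)
    for i in range(count):
        # Integer division buckets messages into one-second groups.
        yield start + timedelta(seconds=i // per_second)
```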
Option 1:
Assuming the consumption plan, from a solution standpoint this was one way I could think of:
Basically, queue 1 becomes the controller for queue 2 and queue 3 when the downstream system is saturated.
Option 2:
We can clone the message and requeue it in the future, and keep doing this until all messages are processed. This keeps a single queue and just requeues each message until it gets processed.
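A sketch of the backoff math behind the requeue, assuming an exponential delay with a cap and a maximum attempt count before giving up and dead-lettering (all three constants are illustrative assumptions):

```python
def backoff_delay_seconds(attempt, base=5.0, cap=300.0):
    """Delay for the cloned message's scheduled enqueue time:
    5 s, 10 s, 20 s, ... capped at 5 minutes."""
    return min(base * (2 ** attempt), cap)

def should_dead_letter(attempt, max_attempts=10):
    """Stop cloning after max_attempts and dead-letter the message instead."""
    return attempt >= max_attempts
```

The attempt count would typically travel with the clone as a message property so each hop can compute the next delay.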
Option 3:
Assuming we have our own dedicated App Service plan instead of consumption, I guess we can Thread.Sleep in the functions if they are getting close to being rate limited (or the downstream system is down) and keep retrying. The delay could probably be calculated from max concurrency, instance count, and the rate limits. I wouldn't consider this on the consumption plan, as the sleeps could increase costs dramatically.
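That calculation might look something like the following sketch, assuming each worker makes one call and then sleeps (the instance count and rate limit are illustrative inputs you would have to supply yourself, not values the Functions runtime exposes directly):

```python
def per_call_delay_seconds(instances, max_concurrency, limit_per_minute):
    """Minimum sleep per call so that all workers combined stay under the
    downstream limit. Total rate = workers / delay, so delay = workers / allowed rate."""
    workers = instances * max_concurrency
    allowed_per_second = limit_per_minute / 60.0
    return workers / allowed_per_second
```

For example, 2 instances with max concurrency 16 against a 960-calls/minute limit gives 32 workers against 16 calls/second, so each worker needs to pace itself to one call every 2 seconds.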
I wonder if I'm missing something simple: what is the best way to handle downstream saturation or to throttle the queue trigger (could be Service Bus or a storage queue)?
Edit: I would like to add that I pumped 1 million messages into a Service Bus queue, scheduled at the send time. I watched the Azure Function (consumption plan) scale to hit about 1,500 messages/second, which provides a good baseline metric. I'm not sure how the dedicated plans will perform yet.
It looks like the host file can be modified on the fly and the settings take effect immediately. Although this applies to all the functions in the app, it may work in my particular case (update the settings and check again every minute or so, depending on the rate limit).
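For reference, the setting in question lives in host.json; a sketch for the v1 runtime (the value is illustrative, and the exact schema depends on the runtime version):

```json
{
  "serviceBus": {
    "maxConcurrentCalls": 16
  }
}
```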
This looks like it might be a step in the right direction when it gets implemented by the Functions team, but even so, we still need a way to manage downstream errors.
Upvotes: 5
Views: 4298
Reputation: 1847
If it's an option, can you ask your downstream team to offer a queue endpoint in addition to the normal API endpoint? Then you can flood their queue and they can process it at their leisure.
Upvotes: 0
Reputation: 12538
Unfortunately, backpressure features like the ones you need are not currently available with the Service Bus trigger (and, more generally, Azure Functions), so you would need to handle that yourself.
The approach you've described would work; you would just need to handle it in different Function Apps, as the Service Bus settings apply to the entire app and not just a single function.
Might not be a huge help in your scenario, but for the apps handling queue 2 or 3, you can also try using a preview flag that will limit the number of scaled-out instances of your app: WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT. Note that this is in preview and there are no guarantees yet.
I would encourage you to open an issue on the functions repo documenting your scenario. These kinds of issues are definitely something we want to address and the more feedback we have, the better.
Upvotes: 2