Reputation: 13
I've been delving into the IConfigurationRefresher functionality and the SetDirty() and RefreshAsync() methods that it provides.
This is the main code that I'm using, which currently refreshes the cache of the FeatureManagement package when a certain API endpoint is called:

```csharp
Log.Logger.Information("Refreshing app config.");

// Mark the cached configuration as dirty immediately (no random delay),
// then pull the latest values from Azure App Configuration.
_refresher.SetDirty(TimeSpan.Zero);
await _refresher.RefreshAsync();
```
This works great with a single hosted instance of a service/application, but how does this functionality work when multiple instances sit behind a load balancer in a web farm, or run as multiple pods in a Kubernetes cluster?
Another question I have is whether the cache is held locally in memory within the consuming service, or whether it is distributed and held in App Configuration itself.
Scenario:
Suppose a request to Application1 occurs and its internal cache is refreshed. What happens when configuration/feature flags are then retrieved from Azure App Configuration via the SDK? Would some calls return the updated values, due to the cache refresh, whilst calls to the second application return out-of-date values?
Upvotes: 1
Views: 204
Reputation: 730
First, I'll discuss the concern of multiple applications behind a load balancer. This has to be solved with an event model that publishes a refresh event to every backend application, so each instance can invalidate its own cache. This can be done with Azure Service Bus; take a look at Azure App Configuration's documentation on its push-model configuration refresh.
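A rough sketch of that push model, assuming an Event Grid subscription on the App Configuration store that forwards change events to a Service Bus topic (the topic/subscription names and connection string here are placeholders, not values from the question):

```csharp
using Azure.Messaging.EventGrid;
using Azure.Messaging.ServiceBus;
using Microsoft.Extensions.Configuration.AzureAppConfiguration;
using Microsoft.Extensions.Configuration.AzureAppConfiguration.Extensions;

// Every instance behind the load balancer runs its own processor,
// so every instance receives the change event independently.
var client = new ServiceBusClient("<service-bus-connection-string>");
ServiceBusProcessor processor =
    client.CreateProcessor("<topic-name>", "<subscription-name>");

processor.ProcessMessageAsync += async args =>
{
    // The Service Bus message body is the Event Grid event emitted
    // when a key-value changes in App Configuration.
    EventGridEvent egEvent =
        EventGridEvent.Parse(BinaryData.FromBytes(args.Message.Body));

    if (egEvent.TryCreatePushNotification(out PushNotification notification))
    {
        // Mark this instance's in-memory cache dirty, then refresh it.
        _refresher.ProcessPushNotification(notification, TimeSpan.Zero);
        await _refresher.RefreshAsync();
    }

    await args.CompleteMessageAsync(args.Message);
};

await processor.StartProcessingAsync();
```

The key point is that the refresh call only updates the in-memory cache of the instance that received the message, which is why the event has to be fanned out to all instances rather than sent to just one.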
Secondly, the concern of caching: the "cache" you are invalidating with the SetDirty call is in-memory only, local to each application instance.
I believe the in-memory-only answer also resolves the last question. Each application instance holds the values from the last time it called SetDirty and performed a refresh, so instances can briefly disagree. Each time that sequence of actions runs, that caller gets the latest values from App Configuration.
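For completeness, here is how that per-instance cache is typically configured at startup; the sentinel key name and expiration interval below are illustrative assumptions, not values from the question:

```csharp
using Microsoft.Extensions.Configuration.AzureAppConfiguration;

builder.Configuration.AddAzureAppConfiguration(options =>
{
    options.Connect("<app-config-connection-string>")
           .ConfigureRefresh(refresh =>
           {
               // Each instance keeps its own in-memory copy of the settings.
               // Watching a single "sentinel" key with refreshAll: true means
               // bumping that one key invalidates everything on refresh.
               refresh.Register("Sentinel", refreshAll: true)
                      .SetCacheExpiration(TimeSpan.FromSeconds(30));
           });
});
```

Because the expiration timer and the cached values both live inside the process, two instances refreshed at different times can serve different values until each one's refresh runs, which matches the behavior described in your scenario.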
Please let me know if anything is unclear.
Upvotes: 2