Reputation: 1308
I have to update the prices of thousands of SKUs by calling a third-party API hosted in AWS. The third party has a TPS throttle of 1000, i.e., 1000 API calls are permitted per second. Each API call takes approximately 1.5 seconds.
Right now I update the prices sequentially by invoking the third-party API, so for 2,000 products the price update takes 2000 * 1.5 = 3000 seconds. By using threading and thread synchronization this should be achievable in about 3 seconds, since the TPS throttle is 1000. Here is a sample code snippet of my current method:
[HttpPut]
public async Task<HttpResponseMessage> UpdatePrices([FromBody] List<PriceViewModel> prices)
{
    int failedAPICalls = 0;
    int successAPICalls = 0;
    foreach (var p in prices)
    {
        PriceManagement pm = new PriceManagement();
        ErrorViewModel error = await pm.UpdateMarketplacePrice(p.SKU, p.Price);
        if (error != null)
        {
            failedAPICalls++;
            //Error handling code here...
        }
        else
        {
            successAPICalls++;
            //Log price update to database
        }
    }
    var totalApiCalls = successAPICalls + failedAPICalls;
    string message = "Total API calls : " + totalApiCalls.ToString() + " | Successful API Calls: " + successAPICalls.ToString()
        + " | Failed API Calls: " + failedAPICalls.ToString();
    return Request.CreateResponse(HttpStatusCode.OK, message);
}
Here are the sample view model definitions:
public class PriceViewModel
{
    public string SKU { get; set; }
    public decimal Price { get; set; }
}

public class ErrorViewModel
{
    public int StatusCode { get; set; }
    public string Description { get; set; }
}
Please help me improve the performance.
Upvotes: 0
Views: 47
Reputation: 131364
The code you posted is sequential: asynchronous, but still sequential. await asynchronously waits for an already-started operation to complete before execution continues; it doesn't block a thread, but it also doesn't fire off all the requests at the same time.
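To make the difference concrete, here is a minimal sketch that reuses the question's PriceManagement type; the second, unthrottled variant shows why a concurrency limit is still needed:

// Sequential: each call must finish before the next one starts (the current pattern).
foreach (var p in prices)
{
    await new PriceManagement().UpdateMarketplacePrice(p.SKU, p.Price);
}

// Concurrent but unthrottled: all calls start at once and we wait for them all.
// With thousands of SKUs this would exceed the 1000 TPS throttle.
var tasks = prices.Select(p => new PriceManagement().UpdateMarketplacePrice(p.SKU, p.Price)).ToList();
await Task.WhenAll(tasks);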
One easy way to make multiple concurrent calls with a specific limit is to use an ActionBlock<T> (from the TPL Dataflow library) with MaxDegreeOfParallelism set to the limit you want, e.g.:
var options = new ExecutionDataflowBlockOptions
{
    MaxDegreeOfParallelism = maxDegreeOfParallelism,
    BoundedCapacity = capacity
};

var block = new ActionBlock<PriceViewModel>(async p =>
{
    var pm = new PriceManagement();
    var error = await pm.UpdateMarketplacePrice(p.SKU, p.Price);
    if (error != null)
    {
        Interlocked.Increment(ref failedAPICalls);
    }
    else
    {
        Interlocked.Increment(ref successAPICalls);
    }
}, options);
Setting MaxDegreeOfParallelism controls how many messages can be processed concurrently; the rest of the messages are buffered.
Once the block is created, we can post messages to it. Each message will be processed by a separate task, up to the MaxDegreeOfParallelism limit. Once we're done, we tell the block so and wait for it to process all remaining messages.
foreach (var p in prices)
{
    await block.SendAsync(p);
}

//Tell the block we're done
block.Complete();

//Wait until all prices are processed
await block.Completion;
By default, there is no limit to how many items can be buffered. This can be a problem if operations are slow, as the buffer may end up with thousands of items waiting to be processed, essentially duplicating the prices list. To avoid this, BoundedCapacity can be set to a specific number. When the limit is reached, SendAsync asynchronously waits until a slot becomes available.
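Putting the pieces together, here is a rough sketch of how the action could look end to end. The MaxDegreeOfParallelism and BoundedCapacity values (100 and 1000) are illustrative assumptions to be tuned against the 1000 TPS throttle, not values taken from the question:

using System.Threading;
using System.Threading.Tasks.Dataflow; // NuGet package: System.Threading.Tasks.Dataflow

[HttpPut]
public async Task<HttpResponseMessage> UpdatePrices([FromBody] List<PriceViewModel> prices)
{
    int failedAPICalls = 0;
    int successAPICalls = 0;

    var options = new ExecutionDataflowBlockOptions
    {
        MaxDegreeOfParallelism = 100, // illustrative; tune against the 1000 TPS limit
        BoundedCapacity = 1000        // illustrative; caps the in-memory buffer
    };

    var block = new ActionBlock<PriceViewModel>(async p =>
    {
        var pm = new PriceManagement();
        var error = await pm.UpdateMarketplacePrice(p.SKU, p.Price);
        if (error != null)
        {
            Interlocked.Increment(ref failedAPICalls);
        }
        else
        {
            Interlocked.Increment(ref successAPICalls);
        }
    }, options);

    // SendAsync waits asynchronously whenever the bounded buffer is full.
    foreach (var p in prices)
    {
        await block.SendAsync(p);
    }

    block.Complete();
    await block.Completion;

    var totalApiCalls = successAPICalls + failedAPICalls;
    string message = "Total API calls : " + totalApiCalls + " | Successful API Calls: " + successAPICalls
        + " | Failed API Calls: " + failedAPICalls;
    return Request.CreateResponse(HttpStatusCode.OK, message);
}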
Upvotes: 1