Despicable me

Reputation: 700

Azure Functions concurrency: does maxConcurrentRequests give truly parallel (simultaneous) execution of all requests at the same time?

In this answer https://stackoverflow.com/a/66163971/6514559 it is explained that:

  1. If Azure decides that your App needs to scale and creates a new host, and say there are two hosts, then the values of these params (maxConcurrentRequests, FUNCTIONS_WORKER_PROCESS_COUNT) are applied per host, not across hosts.
  2. If your App has multiple Functions, then maxConcurrentRequests applies across all Functions within that host, not per Function.
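For reference, maxConcurrentRequests is configured in host.json under the HTTP extension settings. A minimal sketch for the v2+ schema (the value 100 mirrors the question's example; it is not necessarily your plan's default):

```json
{
  "version": "2.0",
  "extensions": {
    "http": {
      "maxConcurrentRequests": 100
    }
  }
}
```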

The question is:

since each instance of the Functions host in the Consumption plan is limited to 1.5 GB of memory and one CPU (Reference), how can it run parallel loads with only one CPU? On a related note, this does say the ACU per instance is 100 for the Consumption plan.

Upvotes: 1

Views: 2081

Answers (1)

Kashyap

Reputation: 17466

See this, this, and this. The OP has already read it, but this too, for completeness.

Is it possible to have more than one function app on a single host

Documentation is very confusing. AFAIK:

  • On consumption plan, no.
  • On premium/app-service plan, there is a hint that might mean the relation is one host to many apps, but IMO it's debatable.

(Is this what is controlled by FUNCTIONS_WORKER_PROCESS_COUNT?)

NO.

You need to understand these terms:

  • Function App: the top-level Azure resource; a logical collection of Functions.
  • Function: a single function with its in/out trigger(s)/binding(s). One Function App contains one or more Functions.
  • Function Host: the virtual/physical host where a Function App runs as a Linux/Windows process.
  • Worker Process: one process (one pid) running on a Function Host.
    • One Worker Process hosts all Functions of one Function App.
    • One Host runs FUNCTIONS_WORKER_PROCESS_COUNT (default 1) Worker Processes, sharing all of the host's resources (RAM, CPU, ...).
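Note that FUNCTIONS_WORKER_PROCESS_COUNT is an application setting, not a host.json value. For local development it can be set in local.settings.json; a sketch (the worker runtime and count here are illustrative, not prescribed values):

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "FUNCTIONS_WORKER_PROCESS_COUNT": "2"
  }
}
```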

maxConcurrentRequests = 100: does this really mean that all 100 requests will be processed in parallel (simultaneously) by a single host (Consumption plan, 1 CPU, 1.5 GB host)?

Discounting cold-start problems, execution would be in parallel within the limits of the selected plan.

This thread here suspects everything is executed in series?!

I'm sure there is an explanation. There is unambiguous documentation that says requests do get executed in parallel, within limits.
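The single-CPU concern conflates parallelism with concurrency: one core can overlap many I/O-bound invocations, because the CPU sits idle while each request waits on a database or HTTP call. A minimal Python sketch of the idea (handle_request and the 0.1 s delay are hypothetical stand-ins for a Function invocation, not Functions-runtime APIs):

```python
import threading
import time

def handle_request(i, results, lock):
    # Hypothetical stand-in for an I/O-bound Function invocation:
    # the sleep models waiting on a database or HTTP call, during
    # which the single CPU is free to serve other requests.
    time.sleep(0.1)
    with lock:
        results.append(i)

def serve_concurrently(n):
    # Dispatch n "requests" concurrently and measure wall time.
    results = []
    lock = threading.Lock()
    start = time.monotonic()
    threads = [
        threading.Thread(target=handle_request, args=(i, results, lock))
        for i in range(n)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results, time.monotonic() - start
```

Ten such requests complete in roughly the latency of one, not ten times that; only CPU-bound work is actually serialized on a single core.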

Upvotes: 2
