bcollins

Reputation: 3459

What are the scaling limits of Dask.distributed?

Are there any anecdotal cases of Dask.distributed deployments with hundreds of worker nodes? Is distributed meant to scale to a cluster of this size?

Upvotes: 7

Views: 988

Answers (1)

MRocklin

Reputation: 57271

Yes

The largest Dask.distributed cluster I've seen is around one thousand nodes. We could theoretically go larger, but not by a huge amount.

The current limit is that the scheduler incurs around 200 microseconds of overhead per task. This translates to about 5000 tasks per second. If each of your tasks takes around one second, then the scheduler can saturate around 5000 cores.
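As a rough back-of-the-envelope sketch (the 200 microsecond overhead and one-second task duration are just the ballpark figures above, not measured constants):

```python
# Rough estimate of how many cores a single Dask scheduler can keep busy.
# The figures below are the ballpark numbers from the text, not measured constants.
scheduler_overhead_s = 200e-6      # ~200 microseconds of scheduler work per task
task_duration_s = 1.0              # assume each task runs for about one second

tasks_per_second = 1 / scheduler_overhead_s           # ~5000 tasks/s the scheduler can dispatch
saturated_cores = tasks_per_second * task_duration_s  # ~5000 cores kept busy

print(f"Scheduler throughput: ~{tasks_per_second:.0f} tasks/s")
print(f"Cores saturated:      ~{saturated_cores:.0f}")
```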

Historically we ran into other limitations like open file handle limits and such. These have all been cleaned up to the scale that we've seen (1000 nodes) and generally things are fine on Linux or OSX. Dask schedulers on Windows stop scaling in the low hundreds of nodes (though you can use a Linux scheduler with Windows workers). I would not be surprised to see other issues pop up as we scale out to 10k nodes.
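For the mixed-OS setup mentioned above (Linux scheduler, Windows workers), a minimal sketch might look like the following; the host name and port are placeholders, and `dask-scheduler`/`dask-worker` are the standard `distributed` command-line entry points:

```python
# On the Linux machine:    dask-scheduler                         (listens on tcp://<linux-host>:8786 by default)
# On each Windows worker:  dask-worker tcp://<linux-host>:8786
#
# From any client machine, connect to the Linux scheduler:
from dask.distributed import Client

client = Client("tcp://linux-host:8786")  # placeholder address for the Linux scheduler
print(client)
```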

In short, you probably don't want to use Dask to replace MPI workloads on your million core Big Iron SuperComputer or at Google Scale. Otherwise you're probably fine.

Upvotes: 10
