Reputation: 467
If I restart my Jupyter kernel, will any existing LocalCluster shut down, or will the dask worker processes keep running?
I know that when I used a SLURM cluster the processes kept running if I restarted my kernel without calling cluster.close(), and I had to use squeue to see them and scancel to cancel them.
For local processes, however, how can I tell that all the worker processes are gone after I have restarted my kernel? And if they do not disappear automatically, how can I manually shut them down when I no longer have access to cluster (the kernel restarted)?
I try to remember to call cluster.close(), but I often forget, and using a context manager doesn't work for my Jupyter needs.
Upvotes: 1
Views: 614
Reputation: 28673
During normal termination of your kernel's Python process, all objects are finalised. For the cluster object, this includes calling close() automatically, so you don't normally need to worry about it.
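If you would rather not rely on finalisation alone (for example because you tend to forget to call close() yourself), one option is to register the close calls with atexit so they run on any normal interpreter shutdown. This is just a sketch of that idea, not something the cluster requires; it assumes dask.distributed is installed, and it only helps when the kernel exits cleanly, since atexit handlers do not run on a forced kill:

    # A minimal sketch: make the implicit cleanup explicit by registering
    # close() with atexit. These handlers only run on a normal shutdown.
    import atexit
    from dask.distributed import LocalCluster, Client

    cluster = LocalCluster(n_workers=2)
    client = Client(cluster)

    # atexit runs handlers in reverse registration order, so the client
    # is closed before the cluster it is attached to.
    atexit.register(cluster.close)
    atexit.register(client.close)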
It is possible that close() does not get a chance to run if the kernel is killed forcibly rather than terminated normally. Since all LocalCluster processes are children of the kernel that started them, this will still result in the cluster stopping, though perhaps with some warnings about connections that didn't have time to clean themselves up. You should be able to ignore such warnings.
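If you want to convince yourself of this, you can inspect the process tree: the worker (and nanny) processes show up as children of the kernel process while the cluster is alive and disappear once it is closed or the kernel exits. A minimal sketch, assuming psutil and dask.distributed are installed:

    # A minimal sketch showing that LocalCluster workers are children of
    # the kernel process; assumes psutil and dask.distributed are installed.
    import os
    import psutil
    from dask.distributed import LocalCluster

    cluster = LocalCluster(n_workers=2, processes=True)

    kernel = psutil.Process(os.getpid())
    for child in kernel.children(recursive=True):
        # Each worker/nanny appears here while the cluster is running.
        print(child.pid, " ".join(child.cmdline()))

    cluster.close()
    # The worker/nanny children are gone now (a multiprocessing resource
    # tracker process may linger, which is harmless).
    print(kernel.children(recursive=True))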
Upvotes: 1