Reputation: 1022
I'm running approximately 400 files through a SLURM pipeline with sbatch. When I queue a task with sbatch ./myscript.sh file_x,
all the files get queued to the same node.
I've tried several variations of the #SBATCH
parameters at the beginning of the sbatch script, with no luck. Here's what I've tried so far:
#!/bin/bash
#SBATCH -N 1
#SBATCH -n 60
#SBATCH -o slurm_out/output_%j.txt
#SBATCH -e slurm_error/error_%j.txt
and
#!/bin/bash
#SBATCH -n 60
#SBATCH -o slurm_out/output_%j.txt
#SBATCH -e slurm_error/error_%j.txt
and
#!/bin/bash
#SBATCH -N 1
#SBATCH -o slurm_out/output_%j.txt
#SBATCH -e slurm_error/error_%j.txt
and
#!/bin/bash
#SBATCH -o slurm_out/output_%j.txt
#SBATCH -e slurm_error/error_%j.txt
The slurm_out files are being created and written to, so sbatch is definitely picking up the parameters.
Regarding the -n option, the docs say the default is "one task per node"; however, that does not seem to be the case:
-n, --ntasks= sbatch does not launch tasks, it requests an allocation of resources and submits a batch script. This option advises the Slurm controller that job steps run within the allocation will launch a maximum of number tasks and to provide for sufficient resources. The default is one task per node, but note that the --cpus-per-task option will change this default.
What parameters will get a single task per node?
Upvotes: 2
Views: 3178
Reputation: 59072
You can simply try with --ntasks-per-node=1.
The default of "one task per node" applies when the number of tasks is not specified but the number of nodes is. In that case, Slurm assumes it must spawn as many tasks as the number of nodes requested. Even then, each task is not guaranteed a distinct node; it depends on how you start the computations in the submission script.
If, furthermore, you need no jobs other than yours on the node, add the --exclusive
parameter.
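Putting that together, a minimal submission script might look like the sketch below. The script name and the per-file command are assumptions based on the question; substitute your actual processing step:

```shell
#!/bin/bash
#SBATCH -N 1                        # request exactly one node per job
#SBATCH --ntasks-per-node=1         # and run a single task on that node
#SBATCH -o slurm_out/output_%j.txt
#SBATCH -e slurm_error/error_%j.txt
# #SBATCH --exclusive               # uncomment to keep other jobs off the node entirely

# "$1" is the file passed on the sbatch command line, as in the question.
# The command below is a placeholder for whatever work you do per file.
srun ./process_file.sh "$1"
```

Submitting one job per file, e.g. `for f in files/*; do sbatch ./myscript.sh "$f"; done`, then gives each file its own one-task allocation, and the scheduler is free to spread the jobs across nodes.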
Upvotes: 2