user3904090

R jobs on SLURM running on a single node only

Despite specifying the job name, the partition, and the node on which the job should run, R still runs on compute01 and never lands on the other nodes. The script is below; any help is appreciated:

#!/bin/bash
#SBATCH --job-name=10/0.30
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --partition=debug
#SBATCH --exclude=compute[23,31-33,40]
#SBATCH --nodelist=compute[07]

echo "program started"  

cd /home/qwe/10/0.30

sbatch /home/R-3.3.1/bin/R CMD BATCH --no-save --no-restore test_dcd.R test_dcd.out 

On running squeue to get the list of running jobs:

             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
         12169      qwe        R      qwe  R       7:08      1 compute01
         12172      qwe        R      qwe  R       5:03      1 compute01
         12175      qwe        R      qwe  R       3:26      1 compute01
         12177      qwe        R      qwe  R       0:02      1 compute01

Upvotes: 0

Views: 1822

Answers (1)

Carles Fenoy

Reputation: 5337

You have to run sbatch with your job script as a parameter, not call sbatch again inside the script.

So instead of running:

sbatch /home/R-3.3.1/bin/R...

you should run:

sbatch myscript.sh

Also, if you want to use multiple CPUs within a single job, use --cpus-per-task=16 instead of --ntasks-per-node; --ntasks and --ntasks-per-node are meant for MPI applications that launch multiple tasks. For more details about these options, check the sbatch manpage. A sketch follows below.

Upvotes: 3
