irritable_phd_syndrome

Reputation: 5077

SLURM: Shell commands cause subsequent #SBATCH directives to not be parsed

I am new to SLURM and using it with OpenMP. I created a C program, main.c:

#include <stdio.h>
#include <stdlib.h>
#include <omp.h>
#include <time.h>

void wait(int s)
{
    int waittime = time(0) + s;
    while(time(0) < waittime);
}


int main(void)
{
    int id=-1;
    int nthreads = 0;

    #pragma omp parallel \
    private(id)
    {   
        nthreads = omp_get_num_threads();
        id = omp_get_thread_num();
        printf("Hello from thread = %i\n",id);

        if(id == 0)
            printf("nthreads = %i\n", nthreads);

        //Let's wait
        wait(60);
    }   

    return 0;
}

and a slurm batch script, slurm.sh:

#!/bin/bash
#SBATCH --cpus-per-task=10
#SBATCH --job-name=OpenMP
#SBATCH --output output.txt
echo "Hello"

#SBATCH --mem-per-cpu=100  

export OMP_NUM_THREADS=10

./a.out

If I submit the script (i.e. sbatch slurm.sh), SLURM happily allocates 10 CPUs for my job. If I move the echo "Hello" line before #SBATCH --cpus-per-task=10, I am only allocated 1 CPU. What is going on here? I do not understand why the location of a shell command in my batch script changes how many CPUs get allocated.

::Edit:: Upon further inspection, it appears that any shell command (e.g. date, echo, set) causes sbatch to ignore all subsequent #SBATCH directives. For instance, in my batch script I can set #SBATCH --mem-per-cpu=1000000 and it will happily run on a 128 GB machine. If I move #SBATCH --mem-per-cpu=1000000 to a line before the echo, SLURM appropriately gives me an error.

Upvotes: 2

Views: 1028

Answers (1)

Carles Fenoy

Reputation: 5377

You cannot place any command between #SBATCH directives: sbatch stops looking for them at the first executable command in the script, and anything after that point is ignored.

From the sbatch man page:

The batch script may contain options preceded with "#SBATCH" before any executable commands in the script.
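Applied to the script in the question, this means every #SBATCH line must come before the echo. A reordered slurm.sh (same options as the asker's, just grouped ahead of the first command) would be:

```shell
#!/bin/bash
#SBATCH --cpus-per-task=10
#SBATCH --job-name=OpenMP
#SBATCH --output output.txt
#SBATCH --mem-per-cpu=100

# All directives above are parsed; everything from here on is executed.
echo "Hello"

export OMP_NUM_THREADS=10

./a.out
```

With this ordering, --mem-per-cpu is honored too, which is why an oversized value like 1000000 then produces the expected error instead of being silently ignored.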

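The behaviour in the question follows directly from this parsing rule. A minimal sketch of the rule in shell (illustrative only, not Slurm's actual implementation; the function name extract_sbatch_options is made up): scan lines from the top, emit #SBATCH lines, skip comments and blanks, and stop at the first real command.

```shell
#!/bin/sh
# Illustrative sketch of sbatch's directive scan (NOT Slurm source code):
# read the batch script from the top, print #SBATCH lines, skip other
# comments and blank lines, and stop at the first executable command.
extract_sbatch_options() {
    while IFS= read -r line; do
        case "$line" in
            '#SBATCH'*) printf '%s\n' "$line" ;;  # a directive: record it
            '#'*|'')    ;;                        # comment or blank: keep scanning
            *)          break ;;                  # first real command: stop
        esac
    done < "$1"
}
```

Run against the asker's slurm.sh, this yields --cpus-per-task, --job-name, and --output, but never --mem-per-cpu, because the echo "Hello" line terminates the scan first.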
Upvotes: 3