Reputation: 75
Using job arrays with Slurm, I have this sbatch file that runs the same command 10 times on different input files:
#!/bin/bash
#SBATCH --job-name=test
#SBATCH --error jobs/test.%A_%a.error
#SBATCH --partition=vrt-cpu
#SBATCH --time=01:00:00
#SBATCH --mem=60000
#SBATCH --cpus-per-task 4
#SBATCH --array=1-10
OMP_NUM_THREADS=$SLURM_JOB_CPUS_PER_NODE
export OMP_NUM_THREADS
time srun $(head -n ${SLURM_ARRAY_TASK_ID} jobs/jobarray.input | tail -n 1)
The input file jobs/jobarray.input contains a series of commands like this one:
/home/fwt/CarTest /home/fwt/hummol/params.conf >& /home/fwt/hummol/test.log
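For illustration, a hypothetical jobs/jobarray.input (the parameter-file names here are only examples) holds one such command per line, and the head | tail pair in the script picks out the line matching the array index:
# Hypothetical contents of jobs/jobarray.input, one command per line:
#   /home/fwt/CarTest /home/fwt/hummol/params1.conf >& /home/fwt/hummol/test.log
#   /home/fwt/CarTest /home/fwt/hummol/params2.conf >& /home/fwt/hummol/test.log
#   ...
# Line number SLURM_ARRAY_TASK_ID is selected for each array task:
head -n "$SLURM_ARRAY_TASK_ID" jobs/jobarray.input | tail -n 1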
I want the log file to be written as shown above (via ">& test.log") instead of through the usual #SBATCH --output test.%A_%a.out directive, but it does not work: no log file is written, even though the job itself runs correctly.
The strange thing is that if I run a single job without the job array, the log file is written correctly.
Does anyone know what is wrong here, please?
Many thanks.
Upvotes: 1
Views: 1821
Reputation: 59320
Every job in the array writes to the same file, and does so with a Bash redirection that starts by truncating the file. So as soon as any task of the array starts, the file is emptied. You should append to the log file rather than use a truncating redirection (note the &>> instead of >&):
/home/fwt/CarTest /home/fwt/hummol/params.conf &>> /home/fwt/hummol/test.log
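As a minimal sketch outside of Slurm (the file name is just an example), this is the difference appending makes when several processes write to the same file; the same applies to the >& / &>> forms that also capture stderr:
# Truncating redirection: each new writer empties the file first.
echo "output of task 1" >  /tmp/test.log
echo "output of task 2" >  /tmp/test.log   # "output of task 1" is now gone
# Appending redirection: every writer's output is kept.
echo "output of task 1" >> /tmp/test.log
echo "output of task 2" >> /tmp/test.log   # both lines are present
Keep in mind that with appending, the output of all ten array tasks still ends up interleaved in the single test.log file.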
Upvotes: 2