yasir

Reputation: 123

Distributing jobs between nodes on HPC, instead of running them on 1 CPU core

I am using PBS on an HPC cluster to submit serially written C codes. Suppose I have to run 5 codes in 5 different directories. When I select 1 node and 5 cores (select=1:ncpus=5) and submit with ./submit &, it forks and runs all 5 jobs. The moment I choose 5 nodes and 1 core each (select=5:ncpus=1) and submit with ./submit &, only 1 core of the first node runs all five jobs and the other 4 nodes stay idle, so the speed drops to 1/5.

My question is: is it possible to fork the job between the nodes as well? Because when I request select=1:ncpus=24 on the HPC, the job sits in the queue, whereas select=4:ncpus=6 runs. Thanks.

Upvotes: 1

Views: 307

Answers (1)

Katia

Reputation: 3914

You should consider using job arrays (option #PBS -t 1-5) with 1 node and 1 CPU each. Then 5 independent jobs will start, and your jobs will wait less in the queue. Within your script you can use the environment variable PBS_ARRAYID to identify the task and use it to select the appropriate directory and start the appropriate C program. Something like this:

#!/bin/bash -l
#PBS -N yourjobname
#PBS -q yourqueue
#PBS -l nodes=1:ppn=1
#PBS -t 1-5

# run the compiled binary for this array task
# (compile each myprog-*.c beforehand, e.g. gcc -o myprog-1 myprog-1.c)
./myprog-${PBS_ARRAYID}

This script will start 5 jobs, each running a binary named myprog-* where * is a number between 1 and 5 (compiled beforehand from the corresponding myprog-*.c source).
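Since you mentioned the 5 codes live in 5 different directories, here is a sketch of the same idea that changes into a per-task directory instead of encoding the task number in the program name. The directory names run-1 … run-5 and the binary name myprog are assumptions; adjust them to your actual layout. PBS_O_WORKDIR is the directory from which you ran qsub.

```shell
#!/bin/bash -l
#PBS -N yourjobname
#PBS -q yourqueue
#PBS -l nodes=1:ppn=1
#PBS -t 1-5

# Each array task cd's into its own directory (run-1 ... run-5)
# relative to where qsub was invoked, then runs the binary there.
cd "${PBS_O_WORKDIR}/run-${PBS_ARRAYID}"
./myprog
```

Submit it once with qsub; PBS expands it into 5 tasks, and qstat -t shows each task separately, so each can land on whatever node has a free core.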

Upvotes: 1
