Reputation: 41
I'm new to distributed computing. I need to train multiple machine learning models on a supercomputer: run the same training script several times, passing a different command line argument to each run. Can I achieve this with mpiexec, so that I can train multiple models in parallel with different inputs?
I found the Single Program Multiple Data (SPMD) model of MPI, but I don't know the corresponding commands.
I want to run the following line in parallel among the computation nodes in the cluster.
python train.py arg > log.out # arg is the argument that differs for each node
But if I use:
mpiexec python train.py arg > log.out
it only runs train.py with the same command line argument, arg, multiple times in parallel.
Can someone point out the right way to do it? Thank you!
Upvotes: 0
Views: 3069
Reputation: 41
Thanks for all the comments and answers. Here is what I did to get to my final solution:
At first, I had a bash script to submit the job to the cluster as an array of jobs, using $PBS_ARRAYID to pass a different command line argument to each job:
#PBS -N ondemand/sys/myjobs/default
#PBS -l walltime=24:10:00
#PBS -l file=1gb
#PBS -l nodes=1:ppn=1:gpus=1:default
#PBS -l mem=40MB
#PBS -j oe
#PBS -A PAS0108
#PBS -o Job_array.out
# Move to the directory where the job was submitted
# run the following cmd in shell to submit the job in an array
# qsub -t 1-6 myjob.sh
cd $PBS_O_WORKDIR
cd $TMPDIR
# $PBS_ARRAYID can be used as a variable
# copy data to local storage of the node
cp ~/code/2018_9_28_training_node_6_OSC/* $TMPDIR
cp -r ~/Datasets_1/Processed/PascalContext/ResNet_Output/ $TMPDIR
cp -r ~/Datasets_1/Processed/PascalContext/Truth/ $TMPDIR
cp -r ~/Datasets_1/Processed/PascalContext/Decision_Tree/ $TMPDIR
# currently in $TMPDIR, load modules
module load python/3.6 cuda
# using $PBS_ARRAYID as a variable to pass the corresponding node ID
python train_decision_tree_node.py $PBS_ARRAYID $TMPDIR > training_log_${PBS_ARRAYID}
# saving logs (the log file name must match the one written above)
cp training_log_${PBS_ARRAYID} ${HOME}/logs/${PBS_ARRAYID}_node_log/
cp my_results $PBS_O_WORKDIR
I submit the above script with command line:
qsub -t 1-6 myjob.sh
But I got an error from the cluster: somehow the local directory $TMPDIR was not recognized by the compute node when the script ran.
What finally worked was a top-level tcsh script that submits each job with a different command line argument in a while loop:
run_multiple_jobs.tcsh:
#!/usr/bin/env tcsh
set n = 1
while ( $n <= 5 )
echo "Submitting job for concept node $n"
qsub -v NODE=$n job.pbs
@ n++
end
job.pbs:
#PBS -A PAS0108
#PBS -N node_${NODE}
#PBS -l walltime=160:00:00
#PBS -l nodes=1:ppn=1:gpus=1
#PBS -l mem=5GB
#PBS -m ae
#PBS -j oe
# copy data
cp -r ~/Datasets/Processed/PascalContext/Decision_Tree $TMPDIR
cp -r ~/Datasets/Processed/PascalContext/Truth $TMPDIR
cp -r ~/Datasets/Processed/PascalContext/ResNet_Output $TMPDIR
# move to working directory
cd $PBS_O_WORKDIR
# run program
module load python/3.6 cuda/8.0.44
python train_decision_tree_node.py ${NODE} $TMPDIR $HOME
# run with run_multiple_jobs.tcsh script
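Inside the training script, the node ID and the scratch path arrive as positional command line arguments. A minimal sketch of the argument-parsing side, matching the call python train_decision_tree_node.py ${NODE} $TMPDIR $HOME (the function and help strings here are assumptions, not taken from the actual train_decision_tree_node.py):

```python
import argparse

def parse_args(argv=None):
    # Mirrors the invocation: python train_decision_tree_node.py ${NODE} $TMPDIR $HOME
    parser = argparse.ArgumentParser()
    parser.add_argument("node_id", type=int, help="concept node ID passed via qsub -v NODE=...")
    parser.add_argument("tmp_dir", help="node-local scratch directory ($TMPDIR)")
    parser.add_argument("home_dir", help="home directory for saving results ($HOME)")
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    print(f"Training node {args.node_id} with data in {args.tmp_dir}")
```

Each submitted job then runs the same script but trains a different model, selected by its node_id.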
Upvotes: 0
Reputation: 13216
One way to achieve what you want is to create a top-level script, mpi_train.py, using mpi4py. In an MPI job, each process has a unique rank and all processes run the same code, so running,
from mpi4py import MPI
comm = MPI.COMM_WORLD
print("Hello! I'm rank " + str(comm.rank))
with
mpiexec -n 4 python mpi_train.py
will give (in no guaranteed order)
Hello! I'm rank 0
Hello! I'm rank 1
Hello! I'm rank 3
Hello! I'm rank 2
The different ranks can then be used to read a separate file which specifies the args. So you'd have something like,
# All code in train.py should be inside functions or guarded by __name__ == "__main__"
import train
from mpi4py import MPI

def get_command_args_from_rank(rank):
    # Some code here to get args from the unique rank no.
    ...

comm = MPI.COMM_WORLD
args = get_command_args_from_rank(comm.rank)

# Assuming the args can be passed to a run function
out = train.run(args)
Note that you should explicitly direct the output of each process, with something like,
with open("log.out" + str(comm.rank), "w") as f:
    f.write(out)
otherwise all prints go to stdout and become jumbled, as the order of the various processes is not guaranteed.
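One possible way to fill in get_command_args_from_rank is to index into a predefined table of argument lists, so that rank i trains the i-th model. The argument values below are made up for illustration; in practice the table could be read from a config file:

```python
# Hypothetical table: one argument list per MPI rank (illustrative values only).
ARGS_BY_RANK = [
    ["--lr", "0.01",  "--model", "resnet"],
    ["--lr", "0.001", "--model", "resnet"],
    ["--lr", "0.01",  "--model", "vgg"],
    ["--lr", "0.001", "--model", "vgg"],
]

def get_command_args_from_rank(rank):
    # Map each rank to its own command line arguments;
    # fail loudly if more ranks were launched than arguments defined.
    if rank >= len(ARGS_BY_RANK):
        raise ValueError(f"No arguments defined for rank {rank}")
    return ARGS_BY_RANK[rank]
```

With this in place, mpiexec -n 4 python mpi_train.py launches four ranks, each picking up its own row of the table.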
Upvotes: 1