Reputation: 89
The HPC system I am using requires me to run programs on the compute nodes using Slurm. Currently I am running a Python script locally on my laptop that calls a number of bash commands in a loop. As an example, the Python file contains a piece of code that looks like this:
command = "docker run -v %s:/usr/local/share/foo/ xyzw mpirun -n %d abcd %d /usr/local/share/foo/%s /usr/local/share/foo/%s" % (directory, numberOfCores, binaryPrecision, inFile, outFile)
command = shlex.split(command)
subprocess.call(command)
Here xyzw is the Docker image and abcd is the program that I run with MPI. Now I would like to do something similar on the HPC, but using Singularity and Slurm. My confusion is the following. Let's say I call the above Python file script.py. Then I have two options: I either keep the command inside the file as
command = "singularity exec -v ..."
and run the Python file using srun python3 script.py
or I change the command inside the file to command = "srun singularity exec -v ..."
and run the Python file from the login node as python3 script.py
.
Which of the two is the correct way of doing things? What worries me the most is this: say I go with the srun python3 script.py option. Will the resources I allocate to the script then be utilised efficiently, or will the Singularity part be relegated to a single core, making the whole exercise futile?
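For concreteness, here is a minimal sketch of the second option, assuming the Docker image has been rebuilt as a Singularity image xyzw.sif (a placeholder name) and that srun, rather than mpirun, launches the MPI ranks:

import shlex
import subprocess

# Assumptions: xyzw.sif is a Singularity image built from the Docker image,
# -B is Singularity's bind-mount analogue of Docker's -v, and the MPI
# library inside the container can talk to Slurm's process manager.
command = (
    "srun -n %d singularity exec -B %s:/usr/local/share/foo "
    "xyzw.sif abcd %d /usr/local/share/foo/%s /usr/local/share/foo/%s"
    % (numberOfCores, directory, binaryPrecision, inFile, outFile)
)
subprocess.call(shlex.split(command))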
Upvotes: 1
Views: 669
Reputation: 3782
Processes run in Singularity, unless specifically configured otherwise, have full access to the hardware resources of the host OS. How those resources are used is determined by whatever application is running inside the container.
MPI is its own special beast, so I strongly suggest reading through the Singularity documentation on using MPI with containers; it directly addresses how to run MPI jobs under Slurm as well.
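As an illustration of the hybrid model those docs describe, the usual pattern is to call srun outside the container and let it start one containerised rank per Slurm task. A minimal sketch of a batch script, where the image name, bind path, and program arguments are all placeholders:

#!/bin/bash
#SBATCH --job-name=abcd
#SBATCH --ntasks=16        # one Slurm task per MPI rank

# srun launches the MPI ranks; singularity exec only wraps each rank.
# xyzw.sif, the bind path, and the abcd arguments are placeholders.
srun singularity exec -B /path/to/data:/usr/local/share/foo \
    xyzw.sif abcd 64 /usr/local/share/foo/in.dat /usr/local/share/foo/out.dat

Submitted with sbatch, this replaces both the mpirun call and the Python-level srun: Slurm's launcher handles rank placement, so the job is not pinned to a single core.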
Upvotes: 1