Reputation: 466
I'm trying to set up gang scheduling with Slurm on my single-node server so that people at the lab can run experiments without blocking each other (for example, if someone runs code that takes days to finish, shorter jobs should be able to time-slice with it instead of waiting days for their turn).
I followed the gang scheduling slurm.conf setup guide on the Slurm site, and to check that it's working properly I launched a bunch of jobs that print the current time and then sleep for a while. But when I check squeue, the jobs never alternate; they run sequentially, one after the other. Why is this happening?
Here's my slurm.conf file:
# See the slurm.conf man page for more information.
ClusterName=localcluster
SlurmctldHost=localhost
ProctrackType=proctrack/linuxproc
ReturnToService=1
SlurmctldPidFile=/var/run/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/lib/slurm/slurmd
SlurmUser=slurm
StateSaveLocation=/var/lib/slurm/slurmctld
TaskPlugin=task/none
# TIMERS
InactiveLimit=0
KillWait=30
MinJobAge=300
SlurmctldTimeout=120
SlurmdTimeout=300
Waittime=0
# SCHEDULING
# Set so that each job alternates every 15 seconds
SchedulerTimeSlice=15
SchedulerType=sched/builtin
SelectType=select/linear
SelectTypeParameters=CR_Memory
PreemptMode=GANG
# LOGGING AND ACCOUNTING
JobCompType=jobcomp/none
JobAcctGatherFrequency=30
SlurmctldDebug=info
SlurmctldLogFile=/var/log/slurm/slurmctld.log
SlurmdDebug=info
SlurmdLogFile=/var/log/slurm/slurmd.log
# COMPUTE NODES
NodeName=lab04 CPUs=48 CoresPerSocket=12 ThreadsPerCore=2 State=UNKNOWN RealMemory=257249
PartitionName=LocalQ Nodes=ALL Default=YES MaxTime=INFINITE State=UP OverSubscribe=FORCE:6 DefMemPerNode=257249 MaxMemPerNode=257249
From what I understand, SchedulerTimeSlice=15 means that jobs should alternate running every 15 seconds.
This is the job I'm launching with sbatch (launching many copies of it, one after the other):
#!/bin/bash
#SBATCH -J test         # Job name
#SBATCH -o job.%j.out   # Name of stdout output file (%j expands to the job ID)
#SBATCH -N 1            # Total number of nodes requested
echo "Test output from Slurm Testjob"
date
sleep 10
date
sleep 10
date
sleep 10
date
sleep 10
I would expect each job to print one or two dates, then the Slurm scheduler would suspend it and let another job run in the meantime, so the final print(s) would come with a delay much greater than 10 seconds.
However, after launching many copies of this job, this is what I see in squeue:
Before the first job launched is done:
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
18 LocalQ test eiarussi PD 0:00 1 (Resources)
19 LocalQ test eiarussi PD 0:00 1 (Priority)
20 LocalQ test eiarussi PD 0:00 1 (Priority)
21 LocalQ test eiarussi PD 0:00 1 (Priority)
17 LocalQ test eiarussi R 0:31 1 lab04
After that job ends:
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
19 LocalQ test eiarussi PD 0:00 1 (Resources)
20 LocalQ test eiarussi PD 0:00 1 (Priority)
21 LocalQ test eiarussi PD 0:00 1 (Priority)
18 LocalQ test eiarussi R 0:02 1 lab04
Why did the first job run for 30+ seconds uninterrupted, instead of another job being allowed to run after 15 seconds? Am I misunderstanding how gang scheduling works, or is there a problem in my conf file?
Upvotes: 1
Views: 229
Reputation: 59360
Beware that, by default, gang scheduling works by suspending the processes linked to a job, which means those processes remain resident in memory. Slurm therefore does not oversubscribe memory, to avoid swapping and crashes.
Given that your submission script does not specify memory, and that the default DefMemPerNode=257249 is equal to the entire memory of the node (RealMemory=257249), each job is assigned the entire memory of the node, preventing two jobs from being co-located on it.
Try requesting less than half the memory for each job.
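For instance, a minimal sketch of your test script with an explicit memory request (the --mem value below is only illustrative; with OverSubscribe=FORCE:6 and RealMemory=257249 MB, anything at or below roughly one sixth of the node's memory would let several jobs share the node):

#!/bin/bash
#SBATCH -J test          # Job name
#SBATCH -o job.%j.out    # Stdout file (%j expands to the job ID)
#SBATCH -N 1             # Total number of nodes requested
#SBATCH --mem=40G        # Illustrative value, well under half of the node's 257249 MB
echo "Test output from Slurm Testjob"
date
sleep 10
date

Once at least two such jobs are on the node, squeue should show them alternating between the R (running) and S (suspended) states roughly every SchedulerTimeSlice seconds.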
Upvotes: 2