Reputation: 1547
I'm working in an HPC environment and I'm using SLURM to submit my jobs to the queue. I'm writing my own memory caching mechanism, and hence I want to know how much memory is available per node so that I can expand or reuse space.
Is there a way to know how much memory is available? Does SLURM set up any environment variables?
Upvotes: 1
Views: 674
Reputation: 1547
In my question I incorrectly stated that I want to access the memory available per node. My MPI tasks are each mapped to 1 CPU, so I actually needed the memory available per CPU.
If you are submitting the job through sbatch, you can access the value of --mem-per-cpu through the environment variable SLURM_MEM_PER_CPU, documented here: https://slurm.schedmd.com/sbatch.html
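For example, a minimal C sketch that reads this variable from inside the job (the value is in megabytes, the default unit of --mem-per-cpu):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* SLURM_MEM_PER_CPU is only set when --mem-per-cpu was given
         * at submission time; the value is in megabytes. */
        const char *mem = getenv("SLURM_MEM_PER_CPU");
        if (mem == NULL) {
            fprintf(stderr, "SLURM_MEM_PER_CPU not set "
                            "(was --mem-per-cpu passed to sbatch?)\n");
            return 1;
        }
        long mem_mb = strtol(mem, NULL, 10);
        printf("memory per CPU: %ld MB\n", mem_mb);
        return 0;
    }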
If the memory available on the node is required, the SLURM API documented at https://slurm.schedmd.com/api.html can be used, as mentioned by @siserte and @damienfrancois.
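As an illustration, a hedged sketch using the node-information calls from slurm.h (field names such as real_memory come from recent SLURM releases and may differ in older ones; link with -lslurm):

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>
    #include <slurm/slurm.h>

    int main(void) {
        node_info_msg_t *nodes = NULL;
        /* Fetch the current node table from the controller. */
        if (slurm_load_node((time_t)0, &nodes, SHOW_ALL) != SLURM_SUCCESS) {
            slurm_perror("slurm_load_node");
            return 1;
        }
        for (uint32_t i = 0; i < nodes->record_count; i++) {
            node_info_t *n = &nodes->node_array[i];
            /* real_memory is the node's configured memory in MB. */
            printf("%s: %lu MB\n", n->name, (unsigned long)n->real_memory);
        }
        slurm_free_node_info_msg(nodes);
        return 0;
    }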
Upvotes: 1
Reputation: 59250
Several options:
If cgroups are set up, you can get that information by simply reading the file /cgroup/memory/slurm/uid_<USERID>/job_<JOBID>/memory.limit_in_bytes on each node.
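A minimal C sketch of that read, assuming the path prefix shown above (many systems mount the memory cgroup at /sys/fs/cgroup/memory instead, so adjust the prefix to your cluster):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        /* SLURM sets SLURM_JOB_ID inside the job; the uid comes from getuid(). */
        const char *jobid = getenv("SLURM_JOB_ID");
        if (jobid == NULL) {
            fprintf(stderr, "SLURM_JOB_ID not set; not inside a SLURM job?\n");
            return 1;
        }
        char path[256];
        snprintf(path, sizeof path,
                 "/cgroup/memory/slurm/uid_%u/job_%s/memory.limit_in_bytes",
                 (unsigned)getuid(), jobid);

        FILE *f = fopen(path, "r");
        if (f == NULL) {
            perror(path);
            return 1;
        }
        unsigned long long limit;
        if (fscanf(f, "%llu", &limit) == 1)
            printf("cgroup memory limit: %llu bytes\n", limit);
        fclose(f);
        return 0;
    }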
Otherwise, using the SLURM API, as @siserte suggested, can work.
Or querying the rlimits using getrlimit(2) should also work.
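For instance, a short sketch with getrlimit(2); which resource carries the limit depends on how the cluster propagates it (RLIMIT_AS is used here as an assumption, RLIMIT_DATA is another candidate):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        struct rlimit rl;
        /* RLIMIT_AS limits the total address space; depending on the
         * configuration the memory limit may land on RLIMIT_DATA instead. */
        if (getrlimit(RLIMIT_AS, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        if (rl.rlim_cur == RLIM_INFINITY)
            printf("address-space limit: unlimited\n");
        else
            printf("address-space limit: %llu bytes\n",
                   (unsigned long long)rl.rlim_cur);
        return 0;
    }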
Upvotes: 3