Reputation: 70
We have a cluster with one node, 200 CPU cores and 2 TB of RAM. The server is shared by 15+ people, who are required to submit jobs through Slurm (the compute node and the login node are the same machine). But some people are unwilling to do so!
So, is there a way to limit the resources of processes that users start directly from the command line, rather than through Slurm?
For example, a non-Slurm job should be restricted to CPU: 2, RAM: 4G.
$ resource-consuming-program # a job submitted directly from the command line should be restricted.
$ cat slurmjob.sh
#!/bin/sh
#SBATCH -J TEST
#SBATCH --cpus-per-task=1
#SBATCH --mem=700G
# We recommend using Slurm to run resource-consuming jobs.
resource-consuming-program
$ sbatch slurmjob.sh # a job submitted through Slurm should not be restricted.
All in all, we just want to limit tasks that are not submitted through Slurm. Thanks. ☺️
Upvotes: 1
Views: 71
Reputation: 59260
Here is an ad-hoc solution to your problem: https://unix.stackexchange.com/questions/526994/limit-resources-cpu-mem-only-in-ssh-session. The idea there is to confine users to a cgroup whenever they are in an SSH session.
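As a rough sketch of that cgroup idea (assuming a systemd-based distribution with cgroups v2; the UID 1234, the 50-limits.conf drop-in name and the CPU:2 / RAM:4G values are just examples mirroring the question, not anything Slurm-specific):

# Runtime cap on one user's slice; CPUQuota=200% corresponds to roughly 2 cores
systemctl set-property user-1234.slice CPUQuota=200% MemoryMax=4G

# Persistent cap for every user slice via a template drop-in
mkdir -p /etc/systemd/system/user-.slice.d
cat > /etc/systemd/system/user-.slice.d/50-limits.conf <<'EOF'
[Slice]
CPUQuota=200%
MemoryMax=4G
EOF
systemctl daemon-reload

Processes launched by slurmd normally live under system.slice/slurmd.service rather than user-<uid>.slice, so jobs submitted with sbatch should not be affected by these limits, but it is worth checking the actual cgroup layout on your node with systemd-cgls before rolling this out.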
Other than that, there is a tool called Arbiter2 that was created specifically to control resource usage on login nodes.
Upvotes: 1