Reputation: 65
I'm running machine learning (ML) jobs that use very little GPU memory, so I could run multiple ML jobs on a single GPU.
To achieve that, I would like to add multiple lines to the gres.conf file that specify the same device. However, the Slurm daemon doesn't seem to accept this; the service fails with:
fatal: Gres GPU plugin failed to load configuration
Is there any option I'm missing to make this work?
Or maybe a different way to achieve that with SLURM?
It is somewhat similar to this question, but that one seems specific to CUDA code compiled with a particular option, which looks much narrower than my general case (at least as far as I understand it): How to run multiple jobs on a GPU grid with CUDA using SLURM
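For illustration, the kind of gres.conf I tried looks like this (device path and type are just an example, not my actual node):

```
# gres.conf — listing the same physical device twice,
# hoping Slurm would schedule two jobs onto it.
# slurmd rejects this with the "plugin failed to load configuration" error.
Name=gpu Type=v100 File=/dev/nvidia0
Name=gpu Type=v100 File=/dev/nvidia0
```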
Upvotes: 3
Views: 573
Reputation: 59250
Besides NVIDIA MPS, mentioned by @Marcus Boden, which is relevant for V100-type cards, there is also Multi-Instance GPU (MIG), which is relevant for A100-type cards.
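For the MPS route, Slurm has a dedicated `mps` gres type that carves a GPU into abstract shares. A minimal sketch (node name, device path, and share count are illustrative, not from the question):

```
# slurm.conf (controller side): declare both gres types
GresTypes=gpu,mps
NodeName=node01 Gres=gpu:1,mps:100

# gres.conf (compute node side): 100 MPS shares backed by one GPU
Name=gpu File=/dev/nvidia0
Name=mps Count=100
```

Jobs then request a fraction of the card with e.g. `--gres=mps:25`, so up to four such jobs can run on the same GPU concurrently.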
Upvotes: 0
Reputation: 1685
I don't think you can oversubscribe GPUs, so I see two options:
Upvotes: 3