Gofrane Haj Ahmed

Reputation: 521

Can Kubernetes share a single GPU between pods?

Is there a possibility to share a single GPU between Kubernetes pods?

Upvotes: 22

Views: 14332

Answers (6)

Prunus Cerasus

Reputation: 91

One solution is to partition an NVIDIA GPU into fully isolated instances, each with its own high-bandwidth memory, cache, and compute cores, using Multi-Instance GPU (MIG): https://www.nvidia.com/en-us/technologies/multi-instance-gpu/.
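For example, once MIG is enabled and the NVIDIA device plugin is configured to expose the slices as named resources, a pod could request one slice along these lines (a minimal sketch; the exact resource name, such as nvidia.com/mig-1g.5gb here, depends on the MIG profile and the plugin's MIG strategy):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mig-example
spec:
  containers:
    - name: cuda-workload
      image: nvidia/cuda:11.0-base
      command: ["nvidia-smi", "-L"]
      resources:
        limits:
          # Assumed resource name for a 1g.5gb MIG slice; it varies with
          # the chosen MIG profile and the device plugin configuration.
          nvidia.com/mig-1g.5gb: 1
```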

Upvotes: 0

Tardis Xu

Reputation: 1167

Yes, you can use Nano GPU to share an NVIDIA GPU.

Upvotes: 0

GioGio

Reputation: 508

Yes, it's possible by making some changes to the scheduler. Someone on GitHub kindly open-sourced their solution; take a look here: https://github.com/AliyunContainerService/gpushare-scheduler-extender
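With that extender installed, sharing is expressed by requesting GPU memory rather than whole GPUs. A minimal sketch, assuming the aliyun.com/gpu-mem extended resource (in GiB) that the project's README describes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpushare-example
spec:
  containers:
    - name: tensorflow
      image: tensorflow/tensorflow:latest-gpu
      resources:
        limits:
          # Request 3 GiB of GPU memory; several pods can be scheduled onto
          # the same physical GPU as long as their memory requests fit.
          aliyun.com/gpu-mem: 3
```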

Upvotes: 3

Adam

Reputation: 2917

Yes, it is possible - at least with Nvidia GPUs.

Just don't specify the GPU in the resource limits/requests. That way, containers from all pods will have full access to the GPU as if they were normal processes.
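A minimal sketch of that approach, assuming the node's default container runtime is the NVIDIA container runtime (which honors NVIDIA_VISIBLE_DEVICES), so the GPU is reachable without a device-plugin resource request:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-gpu-example
spec:
  containers:
    - name: cuda-workload
      image: nvidia/cuda:11.0-base
      command: ["nvidia-smi"]
      env:
        # Expose all GPUs on the node to this container. There is no
        # nvidia.com/gpu entry under resources, so the scheduler does not
        # count this pod against the GPU and other pods can use it too.
        - name: NVIDIA_VISIBLE_DEVICES
          value: all
```

Keep in mind that nothing isolates or limits GPU memory between pods in this setup.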

Upvotes: 10

iamvishnuks

Reputation: 105

The official docs say pods can't request a fraction of a GPU. If you are running a machine learning application across multiple pods, then you should look into Kubeflow. The Kubeflow folks have solved this issue.

Upvotes: 1

Mauro Baraldi

Reputation: 6575

As the official doc says, GPUs are only supposed to be specified in the limits section, which means:

- You can specify GPU limits without specifying requests, because Kubernetes will use the limit as the request value by default.

- You can specify GPU in both limits and requests, but these two values must be equal.

- You cannot specify GPU requests without specifying limits.

Containers (and pods) do not share GPUs. There's no overcommitting of GPUs.

Each container can request one or more GPUs. It is not possible to request a fraction of a GPU.
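For reference, the standard way to request a whole GPU (following the pattern in the official docs) looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
    - name: cuda-container
      image: nvidia/cuda:11.0-base
      resources:
        limits:
          # Whole GPUs only; fractional values are not accepted.
          nvidia.com/gpu: 1
```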

Also, you can follow this discussion to get a little bit more information.

Upvotes: 11
