I'm sK

Reputation: 33

Can a dedicated GPU be shared by multiple Kubernetes pods?

Is there a way to share a GPU between multiple pods, or do we need a specific model of NVIDIA GPU?

Upvotes: 1

Views: 2573

Answers (1)

Mikołaj Głodziak

Reputation: 5267

Short answer, yes :)

Long answer below :)

There is no "built-in" solution to achieve that, but you can use many tools (plugins) to control GPU. First look at the Kubernetes official site:

Kubernetes includes experimental support for managing AMD and NVIDIA GPUs (graphical processing units) across several nodes.

This page describes how users can consume GPUs across different Kubernetes versions and the current limitations.

Also look at the limitations:

  • GPUs are only supposed to be specified in the limits section (see the example manifest after this list), which means:
      - You can specify GPU limits without specifying requests, because Kubernetes will use the limit as the request value by default.
      - You can specify GPU in both limits and requests, but these two values must be equal.
      - You cannot specify GPU requests without specifying limits.
  • Containers (and Pods) do not share GPUs. There's no overcommitting of GPUs.
  • Each container can request one or more GPUs. It is not possible to request a fraction of a GPU.

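To illustrate those rules, here is a minimal sketch of a pod manifest that requests one whole GPU via the limits section. The pod name, container name, and image are placeholders, and the `nvidia.com/gpu` resource name assumes the NVIDIA device plugin has already been deployed on the node:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-example                # placeholder pod name
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda-container         # placeholder container name
      image: nvidia/cuda:11.0.3-base-ubuntu20.04   # example CUDA image
      resources:
        limits:
          nvidia.com/gpu: 1        # whole GPUs only; the request defaults to this same value
```

Because containers cannot share a GPU or request a fraction of one, every pod that needs GPU access has to be granted at least one whole device this way.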
As you can see, this supports GPUs across several nodes. You can also find a guide there on how to deploy it.

Additionally, if you don't specify the GPU in the resource requests / limits, the containers from all pods will have full access to the GPU, as if they were normal processes. There is nothing extra to configure in this case.

For more, also take a look at this GitHub topic.

Upvotes: 1
