ALU

Reputation: 374

torch.cuda.is_available returns False and nvidia-smi is not working

I'm trying to build a Docker image that can run using GPUs. This is my situation inside the Docker image:

I have Python 3.6 and I am starting from the image nvidia/cuda:10.0-cudnn7-devel. Torch does not see my GPUs.

nvidia-smi is not working either, returning this error:

> Failed to initialize NVML: Unknown Error
> The command '/bin/sh -c nvidia-smi' returned a non-zero code: 255

I installed the NVIDIA toolkit and nvidia-smi with

 RUN apt install nvidia-cuda-toolkit -y
 RUN apt-get install nvidia-utils-410 -y

Upvotes: 0

Views: 2050

Answers (1)

ALU

Reputation: 374

I figured out that the problem is you can't use nvidia-smi during the build (RUN nvidia-smi). Any check related to the availability of the GPUs won't work at build time, because `docker build` has no access to the GPUs.

Using CMD /bin/bash and typing the command python3 -c 'import torch; print(torch.cuda.is_available())' at runtime, I finally get True. I also removed

RUN apt install nvidia-cuda-toolkit -y
RUN apt-get install nvidia-utils-410 -y

as suggested by @RobertCrovella.
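A minimal sketch of the resulting Dockerfile (the Python/Torch install lines are illustrative assumptions, not from the original post): the base image already ships the CUDA libraries, and any GPU check is deferred to container runtime, where the NVIDIA runtime exposes the devices.

```dockerfile
# Base image already provides CUDA 10.0 + cuDNN 7;
# no nvidia-cuda-toolkit / nvidia-utils packages are needed.
FROM nvidia/cuda:10.0-cudnn7-devel

# Illustrative: install Python and Torch (exact packages/versions assumed).
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip3 install torch

# Do NOT put `RUN nvidia-smi` here: the build stage has no GPU access,
# so it fails with "Failed to initialize NVML: Unknown Error".

# Check the GPU at runtime instead:
CMD ["python3", "-c", "import torch; print(torch.cuda.is_available())"]
```

Started with GPU access, e.g. `docker run --gpus all <image>` (or `--runtime=nvidia` on older Docker versions), the CMD should then print True.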

Upvotes: 3
