user2415706

Reputation: 972

nvidia-smi executable file not found on docker in WSL

I set up CUDA on WSL2 Ubuntu 20.04 and am able to successfully run commands like:

docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark

and

docker run -it --gpus all -p 8888:8888 tensorflow/tensorflow:latest-gpu-py3-jupyter

but

docker run --gpus all --rm nvidia/cuda:10.0-runtime nvidia-smi

fails with the following error, and I do not have a good mental model of how Docker works:

docker: Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: exec: "nvidia-smi": executable file not found in $PATH: unknown.

This command works outside of Docker:

nvidia-smi

Upvotes: 0

Views: 2656

Answers (1)

Robert Fischer

Reputation: 1443

Whether nvidia-smi works outside of Docker is irrelevant. The error message is telling you that the image nvidia/cuda:10.0-runtime does not have nvidia-smi on the $PATH, which probably means it isn't installed at all. If the nvidia-smi executable is in the image but not on the $PATH, then you just need to provide the absolute path to the executable. If the executable is not in the image, then you need an image that does have nvidia-smi on the $PATH, either by extending nvidia/cuda:10.0-runtime via a Dockerfile or by switching to a different image.

(Since nvidia-smi is really for development and sysadmin purposes, it doesn't surprise me that something labelled runtime is missing it.)
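
For what it's worth, here is a rough sketch of both options. If the binary is actually in the image but off the $PATH, you would invoke it by absolute path (the path below is a guess; locate it first with find / -name nvidia-smi inside the container):

docker run --gpus all --rm nvidia/cuda:10.0-runtime /usr/local/nvidia/bin/nvidia-smi

To extend the image instead, a minimal Dockerfile might look like this (the nvidia-utils-450 package name is illustrative; match it to whatever driver version your host runs):

# Dockerfile: extend the runtime image so nvidia-smi lands on the $PATH
FROM nvidia/cuda:10.0-runtime
# nvidia-utils-450 is a guess; use the package matching your host driver
RUN apt-get update && \
    apt-get install -y --no-install-recommends nvidia-utils-450 && \
    rm -rf /var/lib/apt/lists/*

Then build and run it:

docker build -t cuda-with-smi .
docker run --gpus all --rm cuda-with-smi nvidia-smi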

Upvotes: 1
