Reputation: 31
I use a p2 instance on AWS, which is supposed to have a Tesla K80 GPU, a card that contains two GK210 GPUs (https://blogs.nvidia.com/blog/2014/11/18/tesla-k80-perf/).
According to the following post from the NVIDIA forums, I should be able to see and access each of the two devices separately (https://devtalk.nvidia.com/default/topic/995255/using-tesla-k80-as-two-tesla-k40/?offset=4).
However, when I run nvidia-smi on the p2 instance, I only see one device:
[ec2-user@ip-172-31-34-73 caffe]$ nvidia-smi
Wed Feb 22 12:20:51 2017
+------------------------------------------------------+
| NVIDIA-SMI 352.99     Driver Version: 352.99         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           On   | 0000:00:1E.0     Off |                    0 |
| N/A   34C    P8    31W / 149W |     55MiB / 11519MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
How can I monitor and access the 2 devices?
Upvotes: 2
Views: 810
Reputation: 152123
The actual situation with a p2.xlarge instance is that you have half of a K80 (a single GK210 GPU) assigned to that VM, so the nvidia-smi output you show is expected, and you will not be able to access two GPU devices from that VM/instance type.
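If you want to confirm this programmatically, the CUDA runtime API will report how many devices the VM actually exposes. Below is a minimal sketch, assuming the CUDA toolkit and nvcc are installed on the instance (the file name check_devices.cu is just an example); on a p2.xlarge it should report a single device:

// check_devices.cu -- minimal sketch to list the CUDA devices visible to this VM.
// Build with: nvcc check_devices.cu -o check_devices
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("Visible CUDA devices: %d\n", count);   // expect 1 on p2.xlarge
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("  Device %d: %s, %zu MiB\n", i, prop.name,
                    prop.totalGlobalMem / (1024 * 1024));
    }
    return 0;
}

nvidia-smi and this check will agree: both enumerate only the GPUs assigned to the VM, so seeing one device is a property of the instance type, not a driver or configuration problem.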
Upvotes: 4