gary69

Reputation: 4250

AWS EC2 Deep Learning instance with CUDA capability 3.0 GPU

I just launched (and paid for) the Deep Learning AMI (Ubuntu 18.04) Version 27.0 (ami-0dbb717f493016a1a) on a g2.2xlarge instance. I activated the PyTorch with Python3 (CUDA 10.1 and Intel MKL) environment:

source activate pytorch_p36

When I run my PyTorch network I see this warning:

/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/cuda/__init__.py:134: UserWarning: 
    Found GPU0 GRID K520 which is of cuda capability 3.0.
    PyTorch no longer supports this GPU because it is too old.
    The minimum cuda capability that we support is 3.5.

Is this warning accurate, or is there a way to run PyTorch on this GPU?
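For reference, the GPU name and compute capability that PyTorch sees can be queried directly from the activated environment (a minimal check; get_device_capability returns a (major, minor) tuple, so capability 3.0 shows up as (3, 0)):

import torch

# On a g2.2xlarge this reports the GRID K520 with capability (3, 0),
# matching the warning above.
print(torch.cuda.is_available())
print(torch.cuda.get_device_name(0))
print(torch.cuda.get_device_capability(0))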

This is the code I use to put my neural net on the GPU:

import torch

if torch.cuda.is_available():
    device = torch.device("cuda:0")  # could also be cuda:1, cuda:2, etc. on a multi-GPU machine
    print("Running on the GPU")
else:
    device = torch.device("cpu")
    print("Running on the CPU")

net = Net(image_height, image_width)
net.to(device)

Upvotes: 0

Views: 678

Answers (1)

gary69

Reputation: 4250

I had to switch to a g3s.xlarge instance. The g2 instances use the older GRID K520 GPU (CUDA compute capability 3.0), which recent PyTorch builds no longer support; the g3 instances use the Tesla M60 (compute capability 5.2), which works fine.

I also had to set num_workers=0 on my DataLoaders, following this thread: https://discuss.pytorch.org/t/oserror-errno-12-cannot-allocate-memory-but-memory-usage-is-actually-normal/56027.
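As a minimal sketch (the TensorDataset and batch size here are just placeholders for the real data), the fix is simply to pass num_workers=0 so batches are loaded in the main process instead of forked worker processes:

import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset; substitute your own Dataset here.
dataset = TensorDataset(torch.randn(100, 3, 32, 32), torch.randint(0, 10, (100,)))

# num_workers=0 disables worker processes, which avoids the
# "OSError: [Errno 12] Cannot allocate memory" from the linked thread.
loader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=0)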

Another PyTorch gotcha to watch for when moving tensors to a device: https://stackoverflow.com/a/51606286/3614578.
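The gotcha there, as I understand it, is that Tensor.to(device) returns a new tensor rather than moving the original in place, so the result has to be reassigned; nn.Module.to(device), by contrast, moves the module's parameters in place. A small sketch:

import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

x = torch.randn(4, 4)
x.to(device)      # no effect: the returned copy is discarded, x stays where it was
x = x.to(device)  # correct: rebind x to the tensor that lives on the device

net = torch.nn.Linear(4, 2)
net.to(device)    # fine for modules: parameters are moved in place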

Upvotes: 1
