Marcel

Reputation: 381

Pytorch/cuda : CPU error and map_location

I wrote this code to load my model:

args = parser.parse_args()

use_cuda = torch.cuda.is_available()

state_dict = torch.load(args.model)
model = Net()
model.load_state_dict(state_dict)
model.eval()

if use_cuda:
    print('Using GPU')
    model.cuda()
else:
    print('Using CPU')

But my terminal returns the following error:

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

So then, without really understanding what I was doing, I tried this:

args = parser.parse_args()

map_location=torch.device('cpu')
state_dict = torch.load(args.model)
model = Net()
model.load_state_dict(state_dict)
model.eval()

But I still get the same error. Can you please tell me how to fix it? (I want to load my model on my CPU.)

Upvotes: 2

Views: 1939

Answers (1)

GaiusJulius

Reputation: 83

I'm assuming you saved the model on a computer with a GPU and are now loading it on a computer without one, or the GPU is otherwise unavailable. Also, which line is causing the error?

The parameter map_location needs to be set inside torch.load. Like this:

state_dict = torch.load(args.model, map_location='cpu')

or

map_location=torch.device('cpu')
state_dict = torch.load(args.model, map_location=map_location)

Notice that you need to pass the map_location argument to the torch.load call itself; assigning it to a variable on its own line, as in your second attempt, has no effect.
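As a minimal self-contained sketch (using a plain nn.Linear as a stand-in for your Net, and an in-memory buffer in place of args.model, since those aren't shown in full):

```python
import io

import torch
import torch.nn as nn

# Stand-in for the question's Net; any nn.Module behaves the same way.
net = nn.Linear(4, 2)

# Save the state_dict to an in-memory buffer (stands in for the file at args.model).
buf = io.BytesIO()
torch.save(net.state_dict(), buf)
buf.seek(0)

# map_location is passed to torch.load itself, so every tensor is
# remapped to the CPU even if it was saved from a CUDA device.
state_dict = torch.load(buf, map_location=torch.device('cpu'))

model = nn.Linear(4, 2)
model.load_state_dict(state_dict)
model.eval()

# All parameters now live on the CPU.
print(all(p.device.type == 'cpu' for p in model.parameters()))
```

The same pattern works with a file path: `torch.load(args.model, map_location='cpu')`.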

Upvotes: 2
