explodingfilms101

Reputation: 588

PyTorch: The number of sizes provided (0) must be greater or equal to the number of dimensions in the tensor (1)

I'm trying to move a CPU model to the GPU using PyTorch, but I'm running into issues. I'm running this on Colab and I'm sure that PyTorch detects a GPU. This is a deep Q-network (RL).

I declare my network as: Q = Q_Network(input_size, hidden_size, output_size).to(device)

I ran into an issue when I tried to pass arguments through the network (it expected type cuda but got type cpu), so I added .to(device):

batch = np.array(shuffled_memory[i:i+batch_size])
b_pobs = np.array(batch[:, 0].tolist(), dtype=np.float32).reshape(batch_size, -1)
b_pact = np.array(batch[:, 1].tolist(), dtype=np.int32)
b_reward = np.array(batch[:, 2].tolist(), dtype=np.int32)
b_obs = np.array(batch[:, 3].tolist(), dtype=np.float32).reshape(batch_size, -1)
b_done = np.array(batch[:, 4].tolist(), dtype=bool)

q = Q(torch.from_numpy(b_pobs).to(device))
q_ = Q_ast(torch.from_numpy(b_obs).to(device))

maxq = torch.max(q_.data,axis=1)
target = copy.deepcopy(q.data)

for j in range(batch_size):
    print(target[j, b_pact[j]].shape) # torch.Size([])
    target[j, b_pact[j]] = b_reward[j]+gamma*maxq[j]*(not b_done[j])
# I run into issues here

Here is the error:

RuntimeError: expand(torch.cuda.FloatTensor{[50]}, size=[]): the number of sizes provided (0) must be greater or equal to the number of dimensions in the tensor (1)
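A minimal sketch that triggers the same error on CPU (the tensor names and shapes are illustrative, not from the code above):

```python
import torch

t = torch.zeros(2, 3)  # illustrative stand-in for `target`
vec = torch.ones(50)   # a 1-D tensor, like the problematic right-hand side

try:
    # t[0, 1] is a single scalar element (shape torch.Size([])),
    # so a 50-element vector cannot be assigned to it
    t[0, 1] = vec
except RuntimeError as e:
    print(e)
```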

Upvotes: 1

Views: 18658

Answers (1)

Michael Jungo

Reputation: 33010

target[j, b_pact[j]] is a single element of the tensor (a scalar, hence the size torch.Size([])). If you want to assign anything to it, the right-hand side can only be a scalar. That is not the case here, as one of the terms is a tensor with one dimension (a vector), namely your maxq[j].

When specifying a dimension dim (axis is treated as a synonym) to torch.max, it will return a named tuple of (values, indices), where values contains the maximum values and indices the location of each of the maximum values (equivalent to argmax).
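For instance (a small sketch with made-up values):

```python
import torch

x = torch.tensor([[1.0, 5.0, 3.0],
                  [4.0, 2.0, 6.0]])

# torch.max with a dimension returns a named tuple, not a plain tensor
result = torch.max(x, dim=1)
print(result.values)   # tensor([5., 6.])
print(result.indices)  # tensor([1, 2])
```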

maxq[j] is not indexing into the maximum values, but rather the tuple of (values, indices). If you only want the values you can use one of the following to get the values out of the tuple (all of them are equivalent, you can use whichever you prefer):

# Destructure/unpack and ignore the indices
maxq, _ = torch.max(q_.data, axis=1)

# Access the first element of the tuple
maxq = torch.max(q_.data, axis=1)[0]

# Access `values` of the named tuple
maxq = torch.max(q_.data, axis=1).values
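With the values extracted, maxq[j] is a 0-dimensional tensor and the original assignment works. A minimal sketch on CPU (batch_size, gamma, and the b_* data are illustrative, not your actual replay memory):

```python
import torch

batch_size, n_actions, gamma = 3, 4, 0.99
q_values = torch.randn(batch_size, n_actions)  # stand-in for q_.data
target = q_values.clone()                      # stand-in for the deepcopy of q.data
maxq = torch.max(q_values, dim=1).values       # values only, shape [batch_size]

b_pact = [0, 2, 1]          # illustrative actions
b_reward = [1.0, 0.0, 1.0]  # illustrative rewards
b_done = [False, True, False]

for j in range(batch_size):
    # maxq[j] is now a scalar, so assigning it to a scalar element succeeds
    target[j, b_pact[j]] = b_reward[j] + gamma * maxq[j] * (not b_done[j])
```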

Upvotes: 3
