Let's say I have a batch with
imgs.shape -> torch.Size([128, 1, 28, 28])
If I want to loop through each image:
for img in imgs:
    print(img.shape) -> torch.Size([1, 28, 28])
If I want to get torch.Size([1, 1, 28, 28]) for each image, what should I do?
Upvotes: 0
Views: 707
Reputation: 7693
Use unsqueeze. Just pass dim, the position at which you want to insert one extra singleton dimension:
imgs = torch.zeros([128, 1, 28, 28])
# dim (int) – the index at which to insert the singleton dimension
imgs.unsqueeze_(dim=1)  # in-place variant of unsqueeze
imgs.shape
>>> torch.Size([128, 1, 1, 28, 28])
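For illustration, iterating over the unsqueezed batch then yields the shape asked for in the question (a sketch using the out-of-place `unsqueeze`, so the original `imgs` is left untouched):

```python
import torch

imgs = torch.zeros([128, 1, 28, 28])
# out-of-place unsqueeze returns a new view; imgs itself keeps its shape
batched = imgs.unsqueeze(1)          # shape: [128, 1, 1, 28, 28]
for img in batched:
    # each element of the loop now carries the extra singleton dimension
    assert img.shape == torch.Size([1, 1, 28, 28])
```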
Upvotes: 1
Reputation: 1902
You can resize the tensor in place to a shape of [128, 1, 1, 28, 28] (the element count is unchanged, so resize_ behaves like a reshape here):
# tensor.resize_(new_shape)
imgs.resize_((128, 1, 1, 28, 28))
Now when you loop through it, each image will have the desired shape [1, 1, 28, 28].
Secondly, if you don't want to change the original data, reshape each image individually instead of resizing in place:
# tensor.reshape(new_shape) returns a view when possible and leaves the original intact
img = img.reshape((1, 1, 28, 28))
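As a sketch of that per-image approach, `unsqueeze(0)` on each looped image is equivalent to reshaping it from [1, 28, 28] to [1, 1, 28, 28], and the stored batch is not modified:

```python
import torch

imgs = torch.zeros([128, 1, 28, 28])
# each img yielded by the loop is a [1, 28, 28] view of the batch;
# unsqueeze(0) returns a new [1, 1, 28, 28] view without touching imgs
reshaped = [img.unsqueeze(0) for img in imgs]
assert imgs.shape == torch.Size([128, 1, 28, 28])       # original unchanged
assert reshaped[0].shape == torch.Size([1, 1, 28, 28])  # per-image result
```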
Have a look at the PyTorch documentation
Upvotes: 1