Arpit Kathuria

Reputation: 414

Convert image to proper dimension PyTorch

I have an input image as a numpy array of shape [H, W, C], where H is the height, W the width and C the number of channels.
I want to convert it to [B, C, H, W], where B is the batch size (always 1 here), moving the channel axis to the front.

_image = np.array(_image)
h, w, c = _image.shape
image = torch.from_numpy(_image).unsqueeze_(0).view(1, c, h, w)

So, will this preserve the image properly, i.e. without displacing the original pixel values?

Upvotes: 4

Views: 11534

Answers (1)

twolffpiggott

Reputation: 1103

No: view only reinterprets the underlying memory, so view(1, c, h, w) will scramble the pixel values rather than move the channel axis. I'd prefer the following, which leaves the original image unmodified and simply adds a new axis as desired:

_image = np.array(_image)
image = torch.from_numpy(_image)
image = image[np.newaxis, :]
# unsqueeze_ works fine here too
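A quick sketch of both variants on a small made-up image (the 4x5 RGB array below is just an illustrative stand-in for `_image`):

```python
import numpy as np
import torch

# Hypothetical example image: height 4, width 5, 3 channels
_image = np.arange(4 * 5 * 3, dtype=np.float32).reshape(4, 5, 3)

image = torch.from_numpy(_image)
batched = image[np.newaxis, :]   # new leading batch axis: (1, H, W, C)
batched2 = image.unsqueeze(0)    # equivalent, via unsqueeze

print(batched.shape)   # torch.Size([1, 4, 5, 3])
```

Both produce the same (1, H, W, C) tensor; the difference is only style (numpy-like indexing vs. the PyTorch method).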

Then to swap the axes as desired:

image = image.permute(0, 3, 1, 2)
# permute fills each new axis from an old axis:
# new axis0 <- old axis0 (batch)
# new axis1 <- old axis3 (channels)
# new axis2 <- old axis1 (height)
# new axis3 <- old axis2 (width)
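To sanity-check that permute preserves the pixel values while the view(1, c, h, w) approach from the question does not, a small sketch (the 2x3 RGB array is an illustrative stand-in):

```python
import numpy as np
import torch

# Hypothetical 2x3 RGB image with distinct pixel values 0..17
_image = np.arange(2 * 3 * 3, dtype=np.float32).reshape(2, 3, 3)
h, w, c = _image.shape

image = torch.from_numpy(_image)[np.newaxis, :]   # (1, H, W, C)
permuted = image.permute(0, 3, 1, 2)              # (1, C, H, W)
viewed = image.reshape(1, c, h, w)                # what view(1, c, h, w) yields

# permute keeps each pixel's channel values attached to its (row, col):
# permuted[0, k, i, j] == image[0, i, j, k] for every i, j, k.
# reshape/view merely reinterprets the flat memory, producing different values.
print(torch.equal(permuted, viewed))   # False
```

This is exactly why permute (or equivalently numpy's transpose before conversion) is the correct way to go from HWC to CHW.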

Upvotes: 6
