Reputation: 25
I’m quite new to PyTorch. I was wondering how I could convert my tensor of size torch.Size([1, 3, 224, 224])
to an image and display it in a Jupyter notebook. Either a PIL or a cv2 format would be fine.
I tried using transforms.ToPILImage(x)
but instead of an image I got something like this: ToPILImage(mode=tensor([[[[1.3034e-16, 1.3034e-16, 1.3034e-16, ..., 1.4475e-16, ...
Maybe I’m doing something wrong 😶
Upvotes: 2
Views: 1976
Reputation: 2200
Since your image is normalized, you need to un-normalize it first, i.e. apply the reverse of the operations you did during normalization. One way is:
class UnNormalize(object):
    def __init__(self, mean, std):
        self.mean = mean
        self.std = std

    def __call__(self, tensor):
        """
        Args:
            tensor (Tensor): Normalized tensor image of size (C, H, W).
        Returns:
            Tensor: Un-normalized image.
        """
        # Undo the per-channel normalization in place.
        for t, m, s in zip(tensor, self.mean, self.std):
            t.mul_(s).add_(m)
            # The normalize code was -> t.sub_(m).div_(s)
        return tensor
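As a side note (just a sketch, assuming the same per-channel statistics as below): since torchvision's Normalize computes (x - mean) / std per channel, you can express the same inversion by instantiating Normalize with flipped parameters:

from torchvision import transforms

# same per-channel statistics that were used for the forward normalization
mean = [0.35675976, 0.37380189, 0.3764753]
std = [0.32064945, 0.32098866, 0.32325324]

# (x - (-m/s)) / (1/s) == x * s + m, which undoes Normalize(mean, std)
inv_normalize = transforms.Normalize(
    mean=[-m / s for m, s in zip(mean, std)],
    std=[1.0 / s for s in std],
)
# usage: inv_normalize(normalized_image)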
To use the UnNormalize class above, you'll need the mean and standard deviation that you used to normalize the image. Then,
unorm = UnNormalize(mean = [0.35675976, 0.37380189, 0.3764753], std = [0.32064945, 0.32098866, 0.32325324])
image = unorm(normalized_image)
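To actually display the result in the notebook, hand the un-normalized tensor to ToPILImage. The transform has to be instantiated first and then called on the tensor; transforms.ToPILImage(x) passes the tensor as the mode argument, which is why the question's attempt printed a transform object instead of an image. Also, if your tensor still has the leading batch dimension (1, 3, 224, 224), squeeze it out with .squeeze(0) before un-normalizing, since UnNormalize iterates over the first dimension. A rough sketch, assuming image is the un-normalized (C, H, W) tensor from above:

from torchvision import transforms

img = image.clamp(0, 1)           # ToPILImage expects float values in [0, 1]

to_pil = transforms.ToPILImage()  # instantiate the transform first ...
pil_img = to_pil(img)             # ... then call it on the tensor
pil_img                           # last expression in a Jupyter cell renders the image inline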
Upvotes: 4