Reputation: 602
How can I upscale an image in PyTorch using transforms, without explicitly defining the target height and width? The upscale factor comes from a command-line argument:
('--upscale_factor', type=int, required=True, help="super resolution upscale factor")
Upvotes: 3
Views: 11033
Reputation: 966
You can do
image_tensor = transforms.functional.resize(image_tensor, size=(image_tensor.shape[1] * 2, image_tensor.shape[2] * 2))
or read out the height and width beforehand with
channels, height, width = image_tensor.size()
Check the documentation of Resize for reference as well.
Upvotes: 0
Reputation: 46351
Here is one interesting example:
import torch
import torch.nn as nn

input = torch.tensor([[1., 2.], [3., 4.]])
input = input[None][None]  # add batch and channel dims -> shape (1, 1, 2, 2)
output = nn.functional.interpolate(input, scale_factor=2, mode='nearest')
print(output)
Out:
tensor([[[[1., 1., 2., 2.],
[1., 1., 2., 2.],
[3., 3., 4., 4.],
[3., 3., 4., 4.]]]])
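The same call works on a whole batch of images, and with other modes; a short sketch (the batch shape here is an assumption for illustration):

```python
import torch
import torch.nn.functional as F

# A batch of 4 RGB images, shape (N, C, H, W).
batch = torch.rand(4, 3, 16, 16)
# Bilinear upscaling by a factor of 2 in both spatial dimensions.
up = F.interpolate(batch, scale_factor=2, mode='bilinear', align_corners=False)
print(up.shape)  # torch.Size([4, 3, 32, 32])
```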
Upvotes: 0
Reputation: 715
If I understand correctly, you want to upsample a tensor x
by just specifying a factor f
(instead of specifying the target width and height). You could try this:
from torch.nn.modules.upsampling import Upsample
m = Upsample(scale_factor=f, mode='nearest')
x_upsampled = m(x)
Note that Upsample
allows for multiple interpolation modes, e.g. mode='nearest'
or mode='bilinear'.
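For example (the factor f=3 and input shape are assumptions; note that Upsample with mode='bilinear' expects a batched (N, C, H, W) input):

```python
import torch
from torch.nn.modules.upsampling import Upsample

f = 3  # hypothetical upscale factor
m = Upsample(scale_factor=f, mode='bilinear', align_corners=False)
x = torch.rand(1, 1, 4, 4)  # batched input: (N, C, H, W)
x_upsampled = m(x)
print(x_upsampled.shape)  # torch.Size([1, 1, 12, 12])
```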
Upvotes: 0
Reputation: 602
This might do the job:
transforms.Compose([transforms.Resize(ImageSize * Scaling_Factor)])
Upvotes: 1