Reputation: 19243
I have an image which I would like to resize (upscale). I don't want any extra gray levels to be introduced into the image, so I used nearest-neighbour interpolation as follows:
scipy.misc.imresize(image, image2.shape, interp="nearest", mode="L")
Original gray levels in the image:
[ 0 2 4 5 8 9 10 11 12 14 15 16 17 18 19 20 21 22 23 25 26 27 28 29 30
31 32 35 36 37 38 41 43 45 46 47 51]
After interpolation:
[ 0 10 20 25 40 45 50 55 60 70 75 80 85 90 95 100 105 110
115 125 130 135 140 145 150 155 160 175 180 185 190 205 215 225 230 235
255]
I also tried changing the mode, but it didn't help. I have no clue how to fix this.
Upvotes: 2
Views: 780
Reputation: 114781
imresize uses PIL or Pillow to do the actual work. It is the conversion to a PIL image with mode 'L' that triggers the rescaling of the data values: if the input data type is not 8-bit, the values are scaled to fill the 8-bit range.
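For reference, the rescaling behaves roughly like a min-max stretch of the data over 0-255. The sketch below is only an approximation (the helper approx_rescale is made up for illustration, and the exact rounding inside SciPy/PIL may differ slightly), but it reproduces the mapping seen in the question, e.g. 0 -> 0, 2 -> 10, 51 -> 255:

import numpy as np

def approx_rescale(arr):
    # Rough stand-in for the 8-bit stretch applied during the mode 'L'
    # conversion: map the array's minimum to 0 and its maximum to 255.
    arr = np.asarray(arr, dtype=np.float64)
    lo, hi = arr.min(), arr.max()
    return np.round((arr - lo) * 255.0 / (hi - lo)).astype(np.uint8)

levels = np.array([0, 2, 4, 5, 8, 51])   # a few gray levels from the question
print(approx_rescale(levels))            # [  0  10  20  25  40 255]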
One way to avoid this is to ensure that the input array has data type numpy.uint8. Then the values are not rescaled.
For example, here is a 3x4 image with 64-bit values (i.e. the data type of the array is numpy.int64):
In [132]: img
Out[132]:
array([[ 1,  1,  2, 17],
       [ 4,  3,  1,  2],
       [ 1,  5,  4,  2]])
Here's what happens when this array is passed to imresize with mode='L':
In [133]: imresize(img, (6, 8), interp='nearest', mode='L')
Out[133]:
array([[  0,   0,   0,   0,  16,  16, 255, 255],
       [  0,   0,   0,   0,  16,  16, 255, 255],
       [ 48,  48,  32,  32,   0,   0,  16,  16],
       [ 48,  48,  32,  32,   0,   0,  16,  16],
       [  0,   0,  64,  64,  48,  48,  16,  16],
       [  0,   0,  64,  64,  48,  48,  16,  16]], dtype=uint8)
If instead the input is first cast to np.uint8, the values are not rescaled:
In [134]: imresize(img.astype(np.uint8), (6, 8), interp='nearest', mode='L')
Out[134]:
array([[ 1,  1,  1,  1,  2,  2, 17, 17],
       [ 1,  1,  1,  1,  2,  2, 17, 17],
       [ 4,  4,  3,  3,  1,  1,  2,  2],
       [ 4,  4,  3,  3,  1,  1,  2,  2],
       [ 1,  1,  5,  5,  4,  4,  2,  2],
       [ 1,  1,  5,  5,  4,  4,  2,  2]], dtype=uint8)
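Applied to the call in the question, the fix is just to cast before resizing. A minimal, self-contained sketch (image and image2 below are small stand-ins for the question's arrays):

import numpy as np
from scipy.misc import imresize

# Stand-ins for the question's `image` and `image2`.
image = np.array([[0, 2, 4],
                  [5, 8, 51]], dtype=np.int64)
image2 = np.zeros((4, 6))

# Casting to uint8 first keeps the original gray levels intact.
resized = imresize(image.astype(np.uint8), image2.shape,
                   interp="nearest", mode="L")
print(np.unique(resized))   # [ 0  2  4  5  8 51] -- no new gray levels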
Upvotes: 5