Reputation: 2193
Keras has this function called flow_from_directory and one of the parameters is called target_size. Here is the explanation for it:
target_size: Tuple of integers (height, width), default: (256, 256).
The dimensions to which all images found will be resized.
What is unclear to me is whether it just crops the original image into a 256x256 matrix (in which case we do not take in the entire image) or whether it just reduces the resolution of the image (while still showing us the entire image)?
If it is - let's say - just reducing the resolution: assume that I have some X-ray images with a size of 1024x1024 each (for breast cancer detection). If I want to apply transfer learning to a pretrained convolutional neural network which only takes 224x224 input images, won't I be losing important data/information when I reduce the size (and resolution) of the image from 1024x1024 down to 224x224? Isn't there such a risk?
Thank you in advance!
Upvotes: 5
Views: 4785
Reputation: 13
Adding onto what the previous comments have mentioned: it does indeed resample/resize your input image. If you go to the documentation page for tf.keras.preprocessing.image.ImageDataGenerator.flow_from_directory, you'll see an additional argument, interpolation, which decides how the resampling happens. The default is nearest, but other options are available.
Note - more information on some of the resampling options can be found in Comparison of Commonly Used Image Interpolation Methods.
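For reference, a minimal sketch of how this looks in practice (the data/train directory layout, batch size, and class_mode are assumptions for illustration):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical directory layout: data/train/<class_name>/*.png
datagen = ImageDataGenerator(rescale=1.0 / 255)

# Every image is resized (not cropped) to target_size; `interpolation`
# controls the resampling filter used for that resize.
train_gen = datagen.flow_from_directory(
    "data/train",
    target_size=(224, 224),      # the full image is downsampled to 224x224
    interpolation="bilinear",    # alternatives include "nearest", "bicubic", "lanczos"
    batch_size=32,
    class_mode="binary",
)
```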
Upvotes: 0
Reputation: 11
It is reducing the resolution of the image (while still showing us the entire image).
It is true that you lose some data, but you can work with an image size a bit larger than 224x224, such as 512x512, which keeps most of the information and trains with comparatively less time and fewer resources than the original 1024x1024 images.
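A sketch of what that could look like, assuming a Keras application backbone such as ResNet50 with include_top=False so the convolutional part accepts the larger input (the specific backbone and classification head are illustrative choices, not the only option):

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

# Transfer learning at a larger input size than the pretrained default.
base = ResNet50(weights="imagenet", include_top=False, input_shape=(512, 512, 3))
base.trainable = False  # freeze the pretrained convolutional backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # binary output, e.g. cancer / no cancer
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```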
Upvotes: 0
Reputation: 3616
The best way for you is to rebuild your CNN to work with your original image size, i.e. 1024x1024.
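A minimal sketch of such a network, assuming a binary classification task on single-channel X-rays (the layer counts and filter sizes are arbitrary placeholders, not a recommended architecture):

```python
from tensorflow.keras import layers, models

# Illustrative CNN built for the native 1024x1024 resolution.
model = models.Sequential([
    layers.Input(shape=(1024, 1024, 1)),   # single-channel X-ray input
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(4),                # aggressive pooling to tame the large input
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(4),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```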
Upvotes: 4