Reputation: 323
I have some malaria-infected blood cell images (multiple blood cells per image) with the following details:
num_features = 929
num_features_train = 743
num_features_test = 186
depth = 24
channels = 3
width = 1600
height = 1200
In a previous image classification task I did on a malaria-infected blood cell image dataset (single blood cell per image), the image details were width = 100, height = 101, depth = 24, so resizing to 50x50 didn't seem an issue.
I obviously need to resize these, but I don't know how to choose the best size when resizing images this large. I can't find anything in my online searching that talks about this. Any advice/experience would be helpful and greatly appreciated. Thanks!!
p.s. I already figured out that if I don't resize images this large, I get a memory error: MemoryError: Unable to allocate 6.64 GiB for an array with shape (929, 1600, 1600, 3) and data type uint8
p.p.s. Resized to 100x100 and still got a memory error; resized to 50x50 and it was OK.
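For reference, the 6.64 GiB in the error message follows directly from the array shape and dtype. A quick back-of-envelope sketch (the helper function is my own, not from the question):

```python
import numpy as np

def array_gib(shape, bytes_per_item=1):
    """Memory footprint in GiB for an array of the given shape.

    bytes_per_item=1 corresponds to uint8; float32 would be 4.
    """
    n_bytes = np.prod(shape, dtype=np.int64) * bytes_per_item
    return n_bytes / 2**30

full = array_gib((929, 1600, 1600, 3))   # the shape from the error message
small = array_gib((929, 100, 100, 3))    # after a 100x100 resize
print(f"{full:.2f} GiB vs {small:.3f} GiB")  # 6.64 GiB vs 0.026 GiB
```

Note that the resized uint8 stack itself is tiny, so a memory error at 100x100 likely came from a later step, e.g. converting to float32 (4x the bytes) or from model activations, rather than from the raw image array.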
So, I guess my question is: doesn't reducing the size so much reduce the resolution? How do the convolutional layers' filters do proper filtering if the resolution is reduced so drastically?
Upvotes: 1
Views: 1784
Reputation: 1873
Reducing the size reduces the resolution, but the resized image can still keep all the important features of the original. Smaller images = fewer features = quicker training and less overfitting. However, too drastic a drop in size may cause images to lose the point of interest. For example, after resizing, a tumor may be smoothed into the surrounding pixels and disappear.
Overall: if images keep the point of interest after resizing, it should be OK.
Upvotes: 1