Reputation: 21
I am trying to do transfer learning by re-training the InceptionV3 on medical images - grayscale 3D brain PET scans.
I have two challenges: converting my data from grayscale to an RGB image and formatting my 3D input data for the inception architecture.
I solved the first challenge by stacking the grayscale image into 3 channels (feeding the same image to all 3 channels of the network).
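For reference, the channel-stacking described above can be sketched like this (the array here is random placeholder data; a real PET slice would be loaded from file, e.g. with nibabel):

```python
import numpy as np

# Hypothetical grayscale 2D slice of shape (79, 95).
gray = np.random.rand(79, 95).astype(np.float32)

# Feed the same image to all 3 channels by stacking it along a new
# last axis, giving an RGB-shaped array the network can accept.
rgb = np.stack([gray, gray, gray], axis=-1)
print(rgb.shape)  # (79, 95, 3)
```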
The second challenge is still a problem: the network accepts only 2D images. My current images have dimensions 79 x 95 x 79 x 3, whereas the network happily accepts 79 x 95 x 3 images.
What would be a good way to solve this? Is it possible to feed the 3D images to the network, or do they have to be converted to 2D? If so, how do I convert them?
In one research paper, a grid method was used: 8 2D slices were extracted from each 3D image and tiled into a single grid image for classification. Is this the only way to convert from 3D to 2D, or are there alternatives?
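The grid method mentioned above could be sketched roughly like this, assuming 8 evenly spaced slices along the first axis arranged in a 2 x 4 grid (the slice count, spacing, and layout are illustrative choices, not necessarily what the paper used):

```python
import numpy as np

# Hypothetical 3D volume (depth, height, width); placeholder data
# standing in for one channel of a brain PET scan.
vol = np.random.rand(79, 95, 79).astype(np.float32)

# Pick 8 evenly spaced slice indices along the depth axis.
idx = np.linspace(0, vol.shape[0] - 1, 8).astype(int)
slices = vol[idx]                                   # (8, 95, 79)

# Tile the slices into a 2 x 4 grid: one 2D image.
rows = [np.hstack(slices[r * 4:(r + 1) * 4]) for r in range(2)]
grid = np.vstack(rows)
print(grid.shape)  # (190, 316)
```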
Upvotes: 1
Views: 1033
Reputation: 498
There are two approaches to solving your second problem.
Quick approach:
Find a way to reduce the dimension of size 79 down to 1.
There are different approaches for this. One way, as you pointed out, is to form a grid. An alternative is to compute a maximum intensity projection (MIP) across several of these slices (e.g. 3 or 10). What works best will depend on the resolution you have in this dimension. I have the feeling that the images you are describing are CT scans; in that case it would be clever not to take the full stack but just the slices belonging to the part you are interested in classifying.
You could feed parts of the stack as MIPs, each with the same global class label. That might work for the transfer learning.
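A minimal sketch of the MIP idea, assuming a chunk of 10 consecutive slices (the chunk position and size are arbitrary here):

```python
import numpy as np

# Hypothetical 3D volume (depth, height, width) with placeholder data.
vol = np.random.rand(79, 95, 79).astype(np.float32)

# Maximum intensity projection over 10 consecutive slices: each pixel
# of the output is the brightest voxel along the projection axis.
chunk = vol[30:40]            # shape (10, 95, 79)
mip = chunk.max(axis=0)       # shape (95, 79)

# Stack into 3 channels so it matches the network's 2D RGB input.
mip_rgb = np.stack([mip] * 3, axis=-1)
print(mip_rgb.shape)  # (95, 79, 3)
```

Sliding this 10-slice window along the volume would give several MIP images per scan, all sharing the scan's class label.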
Long and more difficult approach:
Find, or design from scratch, an architecture that accepts 3D images as input. I am not aware of the current literature on that topic, but a good starting example might be this: https://ai.googleblog.com/2020/02/ultra-high-resolution-image-analysis.html?m=1
Upvotes: 1