skidjoe

Reputation: 649

Plugging a pre-trained model on top of embeddings from another pre-trained model: how do I make the input dimensions work?

I am experimenting with placing a pre-trained model (e.g. VGG, AlexNet, etc.) on top of the embeddings output by another model. The only unclear part for me is how I would go about making the input dimensions work with that newly added pre-trained model. In more concrete terms:

  1. Grab the embeddings of images from pre-trained model 1
  2. Plug them into pre-trained model 2 to perform image classification
  3. Pre-trained model 2 requires RGB images of a certain shape [3, x, x], while I only have embeddings of shape [512].

Is there any way to get this to work, such that I can input an already processed image embedding into another pre-trained model and successfully perform image classification?
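For reference, this is roughly how I obtain the embeddings in step 1 (a minimal sketch, assuming a ResNet18 backbone as pre-trained model 1; the actual model may differ):

```python
import torch
import torchvision.models as models

# Hypothetical model 1: a ResNet18 with its final classification layer
# removed, so the forward pass returns 512-dimensional embeddings.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classifier head
backbone.eval()

with torch.no_grad():
    images = torch.randn(8, 3, 224, 224)  # dummy batch of RGB images
    embeddings = backbone(images)         # shape: [8, 512]
print(embeddings.shape)
```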

Upvotes: 0

Views: 217

Answers (1)

Ivan

Reputation: 40648

If you have already extracted an embedding from your image, you should not be looking to use a CNN as model 2. A typical CNN architecture consists of a feature extractor (the convolutional layers) and a classifier (fully connected layers). The purpose of the convolutional part is to extract relevant information from the image, while the latter maps this information to the desired task (for instance, classification).
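As a rough illustration of that split (using torchvision's VGG16 purely as an example), the two parts are exposed as separate modules:

```python
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
print(vgg.features)    # convolutional feature extractor
print(vgg.classifier)  # fully connected classifier head
```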

In your case, using a fully connected classifier as model 2 would make sense.
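For example, a minimal sketch of such a model 2, assuming 512-dimensional embeddings and a hypothetical number of classes:

```python
import torch
import torch.nn as nn

num_classes = 10                  # hypothetical; set to your dataset's class count
embeddings = torch.randn(8, 512)  # dummy batch of 512-d embeddings from model 1

# A small fully connected classifier mapping embeddings to class scores.
model2 = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(256, num_classes),
)

logits = model2(embeddings)       # shape: [8, num_classes]
```

You would then train this classifier on (embedding, label) pairs while keeping model 1 frozen.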

Upvotes: 0
