Reputation: 10169
I am trying to approach a multi-label image classification problem, for which I have image data, but I also have some other features like gender, etc. The issue is that I won't get this extra information during testing; in other words, during testing only the image information will be provided.
My question is: how can I use these extra features to help my image model, which is a convolutional neural network, even though I won't have this info during testing?
Any advice will be helpful. Thanks in advance.
Upvotes: 1
Views: 1936
Reputation: 13498
This is a really open-ended question, so I can only give you some general guidelines on how this can work.
The keras model API supports multiple inputs as well as merge layers. For example, you can have something like this:
from keras.layers import Input
from keras.layers.merge import Concatenate
from keras.models import Model

# Two separate inputs: the image and the extra (non-image) features
image = Input(...)
text = Input(...)
... # apply layers onto image and text

# Merge the two branches into a single tensor
combined = Concatenate()([image, text])
... # apply layers onto combined to produce the final output
output = ...

model = Model(inputs=[image, text], outputs=[output])
This way you can have a model that takes multiple inputs and makes use of all of your data sources. keras has tools to combine your different inputs to produce one output. The part where this becomes open-ended is the architecture.
Right now you should probably pass image through a CNN, and then merge the output with text. You have to tweak the exact specifications, such as how you handle each input, your merge method, and how you handle the combined output.
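As a rough illustration only (the input shapes, layer sizes, and label count below are placeholder assumptions, not a recommendation), one possible wiring is a small CNN branch for the image, a dense branch for the extra features, and a Concatenate merge before the final multi-label classifier:

from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense
from keras.layers.merge import Concatenate
from keras.models import Model

# Image branch: a small CNN (shape and filter counts are illustrative)
image = Input(shape=(64, 64, 3))
x = Conv2D(32, (3, 3), activation='relu')(image)
x = MaxPooling2D((2, 2))(x)
x = Flatten()(x)

# Extra-feature branch: e.g. gender and a few other tabular features
extra = Input(shape=(4,))
y = Dense(8, activation='relu')(extra)

# Merge both branches; sigmoid output since the problem is multi-label
combined = Concatenate()([x, y])
z = Dense(64, activation='relu')(combined)
output = Dense(10, activation='sigmoid')(z)  # 10 labels, illustrative

model = Model(inputs=[image, extra], outputs=[output])
model.compile(optimizer='adam', loss='binary_crossentropy')

At training time you would then call model.fit with a list of two arrays, one per input.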
A good example of merge being used is here, where a GAN is given latent noise in the form of an image but also a label to determine what kind of image it should generate. Both the discriminator and the generator make use of the multiply merge layer to combine their inputs.
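For reference, here is a minimal sketch (with assumed sizes; the names are illustrative) of how a Multiply merge can condition a latent noise vector on an integer label, in the spirit of a conditional GAN generator:

from keras.layers import Input, Dense, Embedding, Flatten, Multiply
from keras.models import Model

latent_dim = 100   # size of the noise vector (assumed)
num_classes = 10   # number of label classes (assumed)

# Two inputs: latent noise and an integer class label
noise = Input(shape=(latent_dim,))
label = Input(shape=(1,), dtype='int32')

# Embed the label into the same space as the noise, then multiply element-wise
label_embedding = Flatten()(Embedding(num_classes, latent_dim)(label))
conditioned = Multiply()([noise, label_embedding])

# ... further generator layers would follow; a single Dense as a stand-in
out = Dense(256, activation='relu')(conditioned)

generator_head = Model(inputs=[noise, label], outputs=[out])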
Upvotes: 4