Reputation: 1001
Suppose I have a fully connected neural network like this in TensorFlow:
import tensorflow as tf

# Two separate inputs, each passed through its own fully connected layer
input_1 = tf.placeholder(tf.float32, [batchSize, numInputs])
input_2 = tf.placeholder(tf.float32, [batchSize, numInputs2])
fc_1 = tf.contrib.layers.fully_connected(input_1, num_outputs=128, activation_fn=tf.nn.relu)
fc_2 = tf.contrib.layers.fully_connected(input_2, num_outputs=128, activation_fn=tf.nn.relu)
# Element-wise sum of the two branches feeds the final layer
fc_3 = tf.contrib.layers.fully_connected(fc_1 + fc_2, num_outputs=100, activation_fn=tf.nn.relu)
output = tf.nn.softmax(fc_3)
This network takes two inputs, each of which goes through its own fully connected layer. The two outputs are then summed element-wise, passed through another fully connected layer, and finally a softmax is applied.
Let's say I train this network on some task, and then for another task I want to use the same network, but without input_2. Thus, I will not have input_2. I am fine with either ignoring fc_2 or keeping it, but the rest of the network should stay the same as before. Is this possible? If so, how can I do it, and if not, why not? I don't want to just save the parameters of this model and load them into the other model.
Thanks
Upvotes: 0
Views: 30
Reputation: 60
For the second network, you can feed a tensor of zeros as the value of input_2. Then fc_2 will be all zeros, and summing it with fc_1 has no effect. (Strictly, this holds when fc_2's biases are zero; after training, non-zero biases make fc_2 the constant relu(biases), which just shifts fc_1 by a fixed offset.)
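A minimal sketch of this approach, assuming the TF1-style graph from the question (the concrete batchSize / numInputs / numInputs2 values below are made up for illustration):

import numpy as np
import tensorflow as tf

batchSize, numInputs, numInputs2 = 32, 10, 20  # example sizes, not from the question

input_1 = tf.placeholder(tf.float32, [batchSize, numInputs])
input_2 = tf.placeholder(tf.float32, [batchSize, numInputs2])
fc_1 = tf.contrib.layers.fully_connected(input_1, num_outputs=128, activation_fn=tf.nn.relu)
fc_2 = tf.contrib.layers.fully_connected(input_2, num_outputs=128, activation_fn=tf.nn.relu)
fc_3 = tf.contrib.layers.fully_connected(fc_1 + fc_2, num_outputs=100, activation_fn=tf.nn.relu)
output = tf.nn.softmax(fc_3)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    x1 = np.random.rand(batchSize, numInputs).astype(np.float32)
    # Second task: no real data for input_2, so feed zeros in its place
    zeros_2 = np.zeros([batchSize, numInputs2], dtype=np.float32)
    probs = sess.run(output, feed_dict={input_1: x1, input_2: zeros_2})

Since input_2 is still a placeholder in the graph, it must be fed something at run time; zeros are the cheapest stand-in that leaves the fc_1 branch (up to the bias caveat above) unchanged.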
Upvotes: 1