Reputation: 3636
I face the following problem in TensorFlow: I constructed a network with the following layer sizes:
input_0 = 100
input_1 = 1000
output = 10
The entire network looks like:
[input_0, 300, 500, input_1, 800, 400, output]
Now I feed data to input_0 and run an optimization step afterwards, using the whole network; that part is straightforward. But after doing that I also want to be able to feed data into input_1 and run an optimization step that backpropagates only up to the input_1 layer. Is that even possible? I assume there must be a way to do it.
In short: how can I train the networks [input_0,...,output] and [input_1,...,output] independently of each other when they are both part of the same graph?
I tried to implement it in TensorFlow, which resulted in many errors. I also tried to split the network into two networks, but then I don't know how to connect them properly.
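Roughly, the graph I have in mind looks like the sketch below. The scope names "front" and "back" and the placeholder_with_default trick are just one way I imagine wiring it up; I am not sure this is the right approach:

    import tensorflow as tf

    x0 = tf.placeholder(tf.float32, [None, 100])   # input_0
    y = tf.placeholder(tf.float32, [None, 10])     # targets

    with tf.variable_scope("front"):               # input_0 -> 300 -> 500 -> input_1
        h = tf.layers.dense(x0, 300, activation=tf.nn.relu)
        h = tf.layers.dense(h, 500, activation=tf.nn.relu)
        front_out = tf.layers.dense(h, 1000, activation=tf.nn.relu)

    # If fed, x1 overrides front_out, so data can also enter
    # directly at the input_1 layer.
    x1 = tf.placeholder_with_default(front_out, [None, 1000])

    with tf.variable_scope("back"):                # input_1 -> 800 -> 400 -> output
        h = tf.layers.dense(x1, 800, activation=tf.nn.relu)
        h = tf.layers.dense(h, 400, activation=tf.nn.relu)
        logits = tf.layers.dense(h, 10)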
Any suggestions?
Upvotes: 1
Views: 255
Reputation: 1785
The optimizers' minimize (and compute_gradients) methods accept a var_list parameter, which lets you update only a subset of the weights. See the documentation for GradientDescentOptimizer here.
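For example, building on the sketch in the question (the names logits, y, and the "back" scope are placeholders for whatever you actually used), a minimal sketch could look like:

    import tensorflow as tf

    # Loss over the final output layer; assumes `logits` and `y`
    # from the graph sketched in the question.
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits))

    opt = tf.train.GradientDescentOptimizer(learning_rate=0.01)

    # Full step: without var_list, every trainable variable is updated.
    train_full = opt.minimize(loss)

    # Restricted step: only the back half's variables are updated, so the
    # optimization effectively stops at the input_1 layer.
    back_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="back")
    train_back = opt.minimize(loss, var_list=back_vars)

With a placeholder_with_default as in the question's sketch, sess.run(train_full, {x0: ..., y: ...}) would then train the whole chain, while sess.run(train_back, {x1: ..., y: ...}) would feed data in at the input_1 layer and update only the back half.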
Upvotes: 1