timudk

Reputation: 43

Freezing the weights of a neural network such that its output takes a particular value at a particular point (tensorflow)

Let's say I have a neural network that looks like this:

import tensorflow as tf

# weights and biases are dicts of tf.Variable defined elsewhere.
def neural_net(x):
    # Two sigmoid hidden layers followed by a linear output layer.
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.sigmoid(layer_1)

    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    layer_2 = tf.nn.sigmoid(layer_2)

    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer

Is there a way in tensorflow to fix the weights such that neural_net(a) always returns b (where a and b are real numbers), e.g., f(1) = 0?

Upvotes: 1

Views: 113

Answers (1)

CAFEBABE

Reputation: 4101

Sure; however, the answer depends a bit on the purpose.

The easiest solution is to just rescale the output, for example by running the result through a linear regressor. While this gives the desired result, it is probably not what you want.
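
For concreteness, here is a minimal sketch of that idea, using a simple output shift rather than a fitted regressor. neural_net, a and b come from the question; the scalar input shape and the name pinned_net are assumptions for illustration only.

import tensorflow as tf

# A sketch only: shift the raw output so the network passes through
# (a, b) by construction, regardless of the learned weights.
a = tf.constant([[1.0]])  # input where the output is pinned, as in f(1) = 0
b = 0.0                   # required output value at a

def pinned_net(x):
    # pinned_net(a) == b exactly, for any values of the weights.
    return neural_net(x) - neural_net(a) + b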

However, the better way is probably to integrate this additional objective into the loss function during training. This way you can trade off between your additional requirement and fitting the weights of your neural network. A generic description of how to adapt the loss can be found at https://www.tensorflow.org/api_guides/python/contrib.losses

# LoadData, MyModelPredictions and MyComplicatedWeightingFunction are
# placeholders standing in for your data pipeline, model and weighting.
images, labels = LoadData(...)
predictions = MyModelPredictions(images)

# Per-example weights, normalized by the number of examples.
weight = MyComplicatedWeightingFunction(labels)
weight = tf.div(weight, tf.size(weight))
loss = tf.contrib.losses.mean_squared_error(predictions, labels, weight)

The weight for your special case needs to be extremely high. That way your criterion is not fully guaranteed, but very likely to hold.

In addition, you need to rewrite the mini-batching mechanism to inject your (x, y) = (1, 0) example into each batch, as sketched below.
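
A minimal sketch of such a batching scheme, assuming numpy arrays x_train and y_train of shape (N, 1); the constant names and the constraint weight value are hypothetical and would need tuning for your problem.

import numpy as np

CONSTRAINT_X, CONSTRAINT_Y = 1.0, 0.0  # the pinned point (x, y) = (1, 0)
CONSTRAINT_WEIGHT = 1e4                # hypothetical value; tune as needed

def make_batch(x_train, y_train, batch_size):
    # Sample batch_size - 1 ordinary training examples ...
    idx = np.random.choice(len(x_train), batch_size - 1, replace=False)
    # ... and append the constraint point to every single batch.
    batch_x = np.vstack([x_train[idx], [[CONSTRAINT_X]]])
    batch_y = np.vstack([y_train[idx], [[CONSTRAINT_Y]]])
    # Ordinary examples get weight 1; the constraint point gets a very
    # large weight, so the weighted loss all but enforces it.
    batch_w = np.concatenate([np.ones(batch_size - 1), [CONSTRAINT_WEIGHT]])
    return batch_x, batch_y, batch_w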

Upvotes: 1
