Reputation: 48
I have a simple model like this:
n_input = 14
n_out = 1
weights = {
    'out': tf.Variable(tf.random_normal([n_input, n_out]))
}
biases = {
    'out': tf.Variable(tf.random_normal([n_out]))
}

def perceptron(input_tensor, weights, biases):
    out_layer_multiplication = tf.matmul(input_tensor, weights['out'])
    out_layer_addition = out_layer_multiplication + biases['out']
    return out_layer_addition
input_tensor = rows
model = perceptron
The dimension of "rows" is (N, 14) and the dimension of "out" is (N,), where "out" is the result of running the model with "rows" as "input_tensor".
I want to calculate the loss in TensorFlow. The algorithm for calculating it is:
ls = 0
for i in range(len(out) - 1):
    if out[i] < out[i + 1]:
        ls += 1
Here "ls" is the model loss. How can I calculate it in TensorFlow notation?
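For example, running that loop on a hypothetical output vector (the sample values below are made up purely for illustration) counts the ascending adjacent pairs:

```python
# hypothetical model output, for illustration only
out = [3.0, 1.0, 2.0, 5.0, 4.0]

ls = 0
for i in range(len(out) - 1):
    if out[i] < out[i + 1]:
        ls += 1

print(ls)  # 2 -> the pairs (1.0, 2.0) and (2.0, 5.0) are ascending
```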
Upvotes: 1
Views: 1866
Reputation: 1104
You can do something like this:
l = out.get_shape()[0]
a = out[0:l - 1]
b = out[1:l]
c = tf.where(a < b, tf.ones_like(a), tf.zeros_like(a))
return tf.reduce_sum(c)
In practice, "a" contains out[i] and "b" contains out[i+1], so "c" has a 1 wherever out[i] < out[i+1]. Summing the entries of "c" is therefore equivalent to adding 1 for each such pair.
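The same slice-and-compare idea can be checked outside TensorFlow; here is a minimal NumPy sketch of the computation (the sample output vector is hypothetical, chosen only for illustration):

```python
import numpy as np

def count_ascending_pairs(out):
    a = out[:-1]  # a[i] = out[i]
    b = out[1:]   # b[i] = out[i+1]
    # c is 1.0 wherever out[i] < out[i+1], else 0.0
    c = np.where(a < b, np.ones_like(a), np.zeros_like(a))
    return c.sum()

out = np.array([3.0, 1.0, 2.0, 5.0, 4.0])
print(count_ascending_pairs(out))  # 2.0 -> pairs (1.0, 2.0) and (2.0, 5.0)
```

The tf.where version computes exactly this, only on tensors inside the graph.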
Upvotes: 1