Reputation: 561
I have a 10x10 matrix and a vector with 10 elements. The 10x10 matrix is randomly initialized with tf.random_uniform; the 10-vector is a constant.
I multiply the vector and the matrix with tf.matmul and call the result logits. Then I evaluate and print logits with logits.eval().
Next, I replace the maximum value in the logits tensor with 1 and all other entries with 0. I evaluate this tensor with .eval() and print the result.
The output is incorrect: the 1 does not end up at the index of the maximum value.
However, if I take the output of logits.eval(), define a constant from it, and then run the same substitution code, the result comes out fine. Following is the code:
tf.set_random_seed(1)
beta = tf.random_uniform([100], dtype=tf.float32, name="beta", seed=2)
beta = tf.reshape(beta, [10,10])
res = tf.constant([[0., 1., 2., 3., 4., 3., 2., 1., 0., 0.]], dtype=tf.float32)
logits = tf.Variable(tf.truncated_normal([1, 10]), name='logits')
sess1 = tf.Session()
sess1.run(tf.global_variables_initializer())
logits = tf.matmul(res, beta)
print(logits.eval(session=sess1))
tf.where(
tf.equal(tf.reduce_max(logits, axis=1, keepdims=True), logits),
tf.constant(1, shape=logits.shape),
tf.constant(0, shape=logits.shape)
).eval(session=sess1)
Output:
[[ 5.64927 11.539942 10.365061 6.367746 10.591797 10.503089
11.0828085 7.0345297 8.477502 8.649068 ]]
array([[0, 0, 0, 1, 0, 0, 0, 0, 0, 0]], dtype=int32)
I think there's something I'm not doing right, but despite spending a significant amount of time debugging it, I'm not able to fix it. I'd appreciate any help. Thanks.
Upvotes: 1
Views: 151
Reputation: 10474
This is a common pitfall in TensorFlow. The problem is that you define beta
with tf.random_uniform
but do not wrap it in a variable. A bare random op produces a new random beta
with each session.run
. Thus, the logits you print first are not the same ones you then do the 0-1 substitution with, since the substitution multiplies your constant vector by a freshly drawn beta. Defining beta
to be a tf.Variable
instead should fix this issue.
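A minimal sketch of the fix applied to your code (again via the tf.compat.v1 shim; I've swapped the tf.constant fill tensors for tf.ones_like/tf.zeros_like, which is equivalent here):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

tf.set_random_seed(1)
# Wrapping the random tensor in a tf.Variable freezes its value after
# initialization, so every session.run sees the same beta.
beta = tf.Variable(
    tf.reshape(tf.random_uniform([100], dtype=tf.float32, seed=2), [10, 10]),
    name="beta")
res = tf.constant([[0., 1., 2., 3., 4., 3., 2., 1., 0., 0.]], dtype=tf.float32)
logits = tf.matmul(res, beta)

# 1 at the position of the row maximum, 0 elsewhere.
one_hot_max = tf.where(
    tf.equal(tf.reduce_max(logits, axis=1, keepdims=True), logits),
    tf.ones_like(logits, dtype=tf.int32),
    tf.zeros_like(logits, dtype=tf.int32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Evaluate both tensors in one run; beta is now fixed either way.
    vals, mask = sess.run([logits, one_hot_max])
    print(vals)
    print(mask)
```

Now the printed logits and the 0-1 mask are computed from the same beta, so the 1 lands at the argmax of the printed values.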
Upvotes: 2