Kubaba

Reputation: 146

Tensorflow doesn't update weights

I'm working through a TensorFlow tutorial and I'm having trouble getting the weights to update.

import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits

s = tf.InteractiveSession()

mnist = load_digits(n_class=2)
X, y = mnist.data, mnist.target
print("y [shape - %s]:" % (str(y.shape)), y[:10])#y [shape - (360,)]: [0 1 0 1 0 1 0 0 1 1]
print("X [shape - %s]:" % (str(X.shape)))#X [shape - (360, 64)]:

# inputs and shareds
shared_weights = tf.Variable(initial_value=tf.random_uniform([64]))#<student.code_variable()>
input_X = tf.placeholder(shape=(None,64),dtype="float32",name="features")
input_y = tf.placeholder(shape=(None,),dtype="float32",name="label")

reduced_sum=tf.reduce_sum(input_X*shared_weights/256, axis=1)
predicted_y = tf.nn.sigmoid(reduced_sum)#<predicted probabilities for input_X>
loss =tf.losses.log_loss(labels=input_y,predictions=predicted_y)#<logistic loss (scalar, mean over sample)>
optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

# I don't understand how I'm supposed to implement these (I don't use them)
train_function = lambda X,y: s.run(optimizer,feed_dict={input_X:X,input_y:y})
                              #<compile function that takes X and y, returns log loss and updates weights>
predict_function = lambda X: s.run(predicted_y,feed_dict={input_X:X})
    #<compile function that takes X and computes probabilities of y>

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)

from sklearn.metrics import roc_auc_score
s.run(tf.global_variables_initializer())

for i in range(5):
    #print(reduced_sum.eval(feed_dict={input_X:X_train}))
    #print(shared_weights.eval())
    s.run(optimizer,feed_dict={input_X:X_train,input_y:y_train})#    <run optimizer operation>
    loss_i = loss.eval(feed_dict={input_X:X_train, input_y:y_train}) #<compute loss at iteration i>

    print("loss at iter %i:%.4f" % (i, loss_i))

    print("train auc:",roc_auc_score(y_train, predict_function(X_train)))
    print("test auc:",roc_auc_score(y_test, predict_function(X_test)))

print("resulting weights:")
plt.imshow(s.run(shared_weights).reshape(8, -1))  # .get_value() is Theano; use s.run() in TF
plt.colorbar()

This part of the tutorial is guided, with placeholders to fill in. Full-batch optimization (the whole training set as a single minibatch) is prescribed. I printed the loss and the weights, but they don't change. Why?

Upvotes: 3

Views: 243

Answers (2)

Seguy

Reputation: 398

Use a bigger step in the optimizer, for example 50.0, and more iterations, for example 50.

The resulting weights that Sergey shows are correct.

There is also no need to call s.run and loss.eval separately. If you define train_function to fetch the loss and the weights along with the update op,

train_function = lambda X,y: s.run([loss, shared_weights, optimizer],
                                   feed_dict={input_X:X, input_y:y})

then a single call updates the weights and returns the current values:

r = train_function(X_train, y_train)
loss_i = r[0]
weights_i = r[1]
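To see why the step size matters here, a small NumPy sketch of the same model can help. This is a stand-in for the TF graph (synthetic digits-like data replaces load_digits, and the gradient is written out by hand): the prediction is sigmoid(sum(X * w / 256)) trained by full-batch gradient descent on the mean log loss. The /256 scaling makes the gradients tiny, so lr=0.01 for 5 iterations barely moves the weights, while lr=50.0 for 50 iterations does.

```python
# NumPy sketch of the question's model: p = sigmoid(sum(X * w / 256, axis=1)),
# full-batch gradient descent on the mean log loss.
# Synthetic digits-like data (values in [0, 16]) stands in for load_digits.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 16.0, size=(360, 64))       # 360 samples, 64 features
true_w = rng.normal(size=64)
scores = X @ true_w
y = (scores > np.median(scores)).astype(float)   # balanced binary labels

def train(lr, steps):
    """Run `steps` full-batch gradient-descent updates; return the weights."""
    w = np.zeros(64)
    for _ in range(steps):
        z = (X * w / 256.0).sum(axis=1)          # same /256 scaling as the tutorial
        p = 1.0 / (1.0 + np.exp(-z))             # sigmoid
        # gradient of the mean log loss w.r.t. w: mean over samples of (p - y) * X / 256
        grad = ((p - y)[:, None] * X / 256.0).mean(axis=0)
        w -= lr * grad
    return w

w_small = train(lr=0.01, steps=5)    # the question's setting
w_big = train(lr=50.0, steps=50)     # the suggested setting

print("|w| after lr=0.01,  5 steps: %.6f" % np.linalg.norm(w_small))
print("|w| after lr=50.0, 50 steps: %.6f" % np.linalg.norm(w_big))
```

With the original settings the per-step weight change is on the order of 1e-4, so the printed weights look frozen even though they are being updated; a larger learning rate makes the movement visible.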

Upvotes: 1

Sergey Kovalev

Reputation: 1

No need for a uniform initialization of the weights; just use zeros:

initial_value=tf.zeros([64])

(image: resulting weights)

Upvotes: 0
