Reputation: 9437
I implemented a simple logistic regression. Before running the training algorithm, I created a variable for my weights, initializing them all to 0...
W = tf.Variable(tf.zeros([784, 10]))
After initializing all my variables correctly, the logistic regression is implemented (which I've tested and runs correctly)...
for epoch in range(training_epochs):
    avg_cost = 0
    total_batch = int(mnist.train.num_examples / batch_size)
    # loop over all batches
    for i in range(total_batch):
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        _, c = sess.run([optimizer, cost], feed_dict={x: batch_xs, y: batch_ys})
        # compute average loss
        avg_cost += c / total_batch
    # display logs per epoch step
    if (epoch + 1) % display_step == 0:
        print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))
My issue is, I need to extract the weights used in the model. I used the following for my model...
pred = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax
I tried extracting the following way...
var = [v for v in tf.trainable_variables() if v.name == "Variable:0"][0]
print(sess.run(var[0]))
I thought that the trained weights would be located in tf.trainable_variables(), however when I run the print function I get an array of zeros.
What I want is all the sets of weights, but for some reason I am getting arrays of zeros instead of the actual trained weights of the classifier.
Upvotes: 1
Views: 1093
Reputation: 48330
It's much easier to just evaluate the variables with the run function; you get back NumPy arrays with their values (note that x is a placeholder, so it can't be evaluated without a feed_dict):
sess.run([W, b])
Upvotes: 1
Reputation: 5115
The variable W should refer to the trained weights. Please try simply doing: sess.run(W)
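As a minimal, self-contained sketch of this, assuming the shapes from the question (W: [784, 10], b: [10]) and with the training loop elided; tf.compat.v1 is used here only so the graph-mode code also runs under TensorFlow 2:

```python
# Hedged sketch: extracting variable values with Session.run.
# Assumes the shapes from the question; training is elided for brevity.
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # graph mode, as in the original TF1 code

W = tf.Variable(tf.zeros([784, 10]), name="weights")
b = tf.Variable(tf.zeros([10]), name="bias")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... run the training loop here ...
    weights, bias = sess.run([W, b])  # plain NumPy arrays, not tensors

print(weights.shape)  # (784, 10)
print(bias.shape)     # (10,)
```

After training, the same `sess.run([W, b])` call in the same session returns the updated values; if it still returns zeros, the session being queried is not the one the optimizer ran in.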
Upvotes: 1