Reputation: 33
I am writing this code to calculate the memory taken by the weights, in bytes:
import sys
import tensorflow as tf

n_input = 784    # MNIST data input (img shape: 28*28)
n_classes = 10   # MNIST total classes (0-9 digits)

# Weights & bias
weights = tf.Variable(tf.random_normal([n_input, n_classes]))
bias = tf.Variable(tf.random_normal([n_classes]))

init = tf.global_variables_initializer()
with tf.Session() as session:
    session.run(init)
    print(session.run(weights))
    print(sys.getsizeof(session.run(weights)))
    # 31472
That doesn't seem to give me the size of the weights themselves.
Can someone please suggest the correct approach?
Thanks.
Upvotes: 1
Views: 1773
Reputation: 4451
I think the number of bytes you get is correct. The weights form a 784-by-10 matrix of float32 values, and each value takes four bytes. That gives 784 * 10 * 4 = 31360 bytes. sys.getsizeof reports 31472 bytes, which is 31472 - 31360 = 112 bytes more; that extra amount is just the fixed overhead of the NumPy ndarray object itself, not weight data.
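If you want to double-check the arithmetic, here is a minimal NumPy sketch (assuming the weights come back as a float32 ndarray, which is tf.random_normal's default dtype; the exact overhead value can vary between NumPy builds):

import sys
import numpy as np

# A float32 array with the same shape as the weights (784 x 10).
arr = np.zeros((784, 10), dtype=np.float32)

print(arr.nbytes)                        # 31360: raw data only (size * itemsize)
print(sys.getsizeof(arr))                # 31360 plus the fixed ndarray object overhead
print(sys.getsizeof(arr) - arr.nbytes)   # the per-object overhead in bytes

So if you only care about the memory taken by the values themselves, arr.nbytes (equivalently arr.size * arr.itemsize) is the cleanest way to get it.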
Let us know if you have more questions!
Upvotes: 4