Mohamed Abdelhafez

Reputation: 31

Tensorflow graph size

I am wondering if there is an easy way to check the size of (memory needed by) a TensorFlow graph before running a TensorFlow session.

I am looking for something that lets me keep changing the system parameters that define the graph and see how big (in memory) the graph becomes accordingly.

Upvotes: 3

Views: 3184

Answers (2)

zbyte

Reputation: 3815

Given a tensor t:

import numpy as np

# For every variable in every collection of t's graph, compute
# number-of-elements * bytes-per-element, then sum and report in MB.
var_sizes = [np.prod(list(map(int, v.get_shape()))) * v.dtype.size
             for key in t.graph.get_all_collection_keys()
             for v in t.graph.get_collection_ref(key)]
print(sum(var_sizes) / (1024**2), 'MB')

Upvotes: 1

chasep255

Reputation: 12175

I have done something similar, where I wanted to see the number of parameters in my model.

import numpy as np
import tensorflow as tf

vars = 0
for v in tf.all_variables():  # tf.global_variables() in newer TF 1.x releases
    vars += np.prod(v.get_shape().as_list())
print(vars)

Now vars contains the sum of the products of the dimensions of all the variables in your graph. If each variable is of type tf.float32, you can multiply vars by 4 to get the number of bytes consumed by all of the variables. This is only a lower bound, however, and there will be some additional overhead. Also, computing the gradients requires a lot of memory, since the activations at each point in the model must be stored for the backward pass.
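The multiply-by-4 step only holds when everything is float32; each dtype carries its own element size. A minimal sketch of the per-dtype accounting, using plain NumPy shape/dtype pairs as stand-ins for graph variables (the shapes below are hypothetical, not from any particular model):

```python
import numpy as np

def bytes_for_variables(shapes_and_dtypes):
    """Sum elements * bytes-per-element over (shape, dtype) pairs."""
    return sum(int(np.prod(shape)) * np.dtype(dtype).itemsize
               for shape, dtype in shapes_and_dtypes)

# Hypothetical model: a 1024x1024 float32 weight matrix and a float64 bias.
model = [((1024, 1024), np.float32), ((1024,), np.float64)]
print(bytes_for_variables(model) / (1024**2), 'MB')  # 4.0078125 MB
```

In a real graph you would feed in each variable's `v.get_shape()` and `v.dtype`; the same overhead caveat applies, since this counts only variable storage, not activations or gradient buffers.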

Upvotes: 1
