Reputation: 307
I'm having an issue with my NumPy array, which has shape (29000, 200, 1024) (about 7 GB). It holds the features of the images in my dataset.
Once it is loaded, my function receives the indices to build the current batch as a tensor. Unfortunately, calling:
tf.gather(array, indices)
freezes, even though printing, for example, array[0] works instantly.
I tried to transform my NumPy array with convert_to_tensor so I could use array_tensor(indice) directly, but convert_to_tensor leads to a memory limit error.
Is there any workaround?
Thank you very much.
Upvotes: 0
Views: 460
Reputation: 57983
Passing a NumPy array directly into the TF op-construction API converts it to a tf.constant
op, which embeds the data in the op definition. You are therefore inlining the whole array into the GraphDef, which is subject to the 2 GB GraphDef limit.
To avoid this, create var = tf.Variable(my_placeholder)
and initialize the variable by running var.initializer with feed_dict={my_placeholder: np_array}.
This puts the NumPy array data directly into the variable store, bypassing the GraphDef.
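A minimal sketch of this pattern, using the graph-mode tf.compat.v1 API (the question predates eager execution) and a small stand-in array instead of the 7 GB feature array:

```python
import numpy as np
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()  # graph mode, as in TF1

# small stand-in for the (29000, 200, 1024) feature array
np_array = np.arange(12, dtype=np.float32).reshape(4, 3)

# the placeholder keeps the data out of the GraphDef
my_placeholder = tf1.placeholder(tf.float32, shape=np_array.shape)
var = tf1.Variable(my_placeholder)

# batch indices are fed at run time
idx = tf1.placeholder(tf.int32, shape=[None])
batch = tf.gather(var, idx)

with tf1.Session() as sess:
    # feed the NumPy data once, at initialization time;
    # it goes straight into the variable store
    sess.run(var.initializer, feed_dict={my_placeholder: np_array})
    result = sess.run(batch, feed_dict={idx: [0, 2]})
print(result)
```

Only the variable's shape and dtype end up in the GraphDef; the array itself is transferred once through the feed and then lives in the variable, so each tf.gather call only reads the requested rows.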
Upvotes: 2