Reputation: 5
I'm building a model to classify 1148 rows of 160,000 columns each into one of 9 classes (1-9). I've done a similar thing before in Keras, but am having trouble transferring the code to tensorflow.keras. Running the program produces the following error:
(1) Resource exhausted: OOM when allocating tensor with shape (1148,1,159998,9) and type float ...k:0 on /device:GPU:0 by allocator GPU_0_bfc ... [[{{node conv1d/conv1d-0-0-TransposeNCHWToNHWC-LayoutOptimizer}}]]
This is caused by the following code. It appears to be a memory issue, but I'm unsure why memory would be a problem here. Any advice would be appreciated.
import tensorflow as tf
from tensorflow.keras.utils import to_categorical
num_classes=9
y_train = to_categorical(y_train,num_classes)
x_train = x_train.reshape((1148, 160000, 1))
y_train = y_train.reshape((1148, 9))
input_1 = tf.keras.layers.Input(shape=(160000,1))
conv1 = tf.keras.layers.Conv1D(num_classes, kernel_size=3, activation='relu')(input_1)
flatten_1 = tf.keras.layers.Flatten()(conv1)
output_1 = tf.keras.layers.Dense(num_classes, activation='softmax')(flatten_1)
model = tf.keras.models.Model(input_1, output_1)
my_optimizer = tf.keras.optimizers.RMSprop(lr=0.02)
model.compile(optimizer=my_optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=50, steps_per_epoch=20)
predictions = model.predict(x_test)
Edit: model.summary()
Layer (type)          Output Shape         Param #
input_1 (InputLayer)  (None, 160000, 1)    0
conv1d (Conv1D)       (None, 159998, 9)    36
flatten (Flatten)     (None, 1439982)      0
dense (Dense)         (None, 9)            12959847
Total params: 12,959,883
Trainable params: 12,959,883
Upvotes: 0
Views: 4363
Reputation: 46
This might sound very silly, but in my case I was getting the (1) Resource exhausted error because I didn't have enough space on my main hard drive. After clearing out some space, my training scripts started working again.
Upvotes: 1
Reputation: 1681
Without more information it is hard to give a concrete answer.
Some things you can try: pass
batch_size=16
to model.fit (the default is 32, so this halves memory usage per step).
Upvotes: 1
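To see why the batch size matters here: the tensor in the OOM message is the Conv1D activation for everything processed in one step, so its size scales linearly with the number of samples per step. A minimal back-of-the-envelope sketch, assuming float32 and the shapes from the model summary in the question:

```python
# Rough memory estimate for the Conv1D activation tensor alone
# (conv_len and channels taken from the model summary; float32 assumed)
def conv_activation_bytes(batch_size, conv_len=159998, channels=9, bytes_per_float=4):
    return batch_size * conv_len * channels * bytes_per_float

full_dataset = conv_activation_bytes(1148)  # all 1148 samples in one step
small_batch = conv_activation_bytes(16)     # with batch_size=16

print(f"full dataset: {full_dataset / 1e9:.1f} GB")  # ~6.6 GB for this one tensor
print(f"batch of 16:  {small_batch / 1e6:.1f} MB")   # ~92 MB
```

About 6.6 GB for a single activation tensor (before counting gradients and the other layers) easily exhausts a typical GPU, while a batch of 16 needs only ~92 MB for the same tensor.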