Reputation: 213
I have built a CNN model that trains without problems, but when I try to use model.predict on the test data, I get the following errors. When running on GPU:
(0) Resource exhausted: 2 root error(s) found.
(0) Resource exhausted: SameWorkerRecvDone unable to allocate output tensor.
Key: /job:localhost/replica:0/task:0/device:CPU:0;7a6cf13ce274a521;
/job:localhost/replica:0/task:0/device:GPU:0;ret_1;0:0
When running on CPU:
File "/home//programs/Neural/model_testing.py", line 224, in <module>
prediction_prob_matrix = model(test_generator,len(df_test), verbose=0)
File "/home/.conda/envs/my_env/lib/python3.9/site-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/.conda/envs/my_env/lib/python3.9/site-packages/keras/src/engine/input_spec.py", line 213, in assert_input_compatibility
raise TypeError(
TypeError: Inputs to a layer should be tensors. Got '<generator object generator at 0x7f7217fe3890>' (of type <class 'generator'>) as input for layer 'model'.
Since the issue happens at the prediction step, I suspect the test data, but I cannot figure it out.
train_generator = generator(df_train, tokenizer, onehot, label_encoder, n_classes, batch_size)
validation_generator = generator(df_valid, tokenizer, onehot, label_encoder, n_classes, batch_size)
test_generator = generator(df_test, tokenizer, onehot, label_encoder, n_classes, batch_size)
model.summary()
model.fit(train_generator,
          validation_data=validation_generator,
          epochs=1,
          steps_per_epoch=len(df_train) // batch_size,
          validation_steps=len(df_valid) // batch_size,
          shuffle=True)
prediction_prob_matrix = model.predict(test_generator, len(df_test), verbose=0)
# get the classes with the highest predicted probability, save them to our dataframe
df_test['lab'] = label_encoder.inverse_transform(prediction_prob_matrix)
# add the predicted probabilities
df_test['PREDICTED_PROB'] = prediction_prob_matrix.max(axis=1)
# take a look at what we've got
df_test.head()
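As a side note, `label_encoder.inverse_transform` expects integer class indices, not a full probability matrix, so an argmax step is normally needed in between. A minimal NumPy sketch (with made-up probabilities standing in for the model output) of how the probability matrix maps to class indices and per-sample confidences:

```python
import numpy as np

# Hypothetical 3 x 4 probability matrix, standing in for model.predict output
prob_matrix = np.array([
    [0.10, 0.70, 0.10, 0.10],
    [0.05, 0.05, 0.80, 0.10],
    [0.60, 0.20, 0.10, 0.10],
])

# Class index with the highest probability per sample;
# this is the shape of input LabelEncoder.inverse_transform expects
pred_indices = prob_matrix.argmax(axis=1)

# The matching confidence values, as used for df_test['PREDICTED_PROB']
pred_probs = prob_matrix.max(axis=1)

print(pred_indices)  # [1 2 0]
print(pred_probs)    # [0.7 0.8 0.6]
```

Also worth noting: the second positional argument of `model.predict` is `batch_size`, and for generator inputs the Keras docs say to control the number of drawn batches with `steps` instead, so `steps=len(df_test) // batch_size` is the usual pattern.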
How to resolve this issue?
Upvotes: 0
Views: 59