Reputation: 37
I am trying to train a model on my own data using the GPU. There are 8 classes of objects to be detected, but when I start the training I get an allocation error.
I have tried different pre-trained models, but the error persists. I have also tried reducing the `batch_size` in the .config file, but it only works when I set the batch size to 1.
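For reference, in the TensorFlow Object Detection API the batch size is set in the `train_config` block of the pipeline .config file; this is the field I reduced (value shown is the one that worked for me):

```
train_config {
  batch_size: 1
  ...
}
```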
I've followed this link to train a model with custom data: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html#training-the-model
Is my hardware not good enough, or is something else wrong?
GPU: Nvidia GTX 1650
CPU: Intel i7 9750H
Memory: 16 GB
Upvotes: 0
Views: 313
Reputation: 46
Your hardware certainly seems capable; I doubt that is the problem. What you can do is look at other resources. This is what I found:
1. https://www.tensorflow.org/tutorials/customization/custom_training_walkthrough
2. https://towardsdatascience.com/custom-object-detection-using-tensorflow-from-scratch-e61da2e10087
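One thing worth noting: the GTX 1650 has only 4 GB of VRAM, which is on the small side for object-detection training, so an allocation error at larger batch sizes is plausible. As a back-of-the-envelope sanity check (all numbers below are illustrative assumptions, not measurements for any particular model), you can estimate how many images fit in memory:

```python
# Rough estimate of the largest training batch that fits in GPU memory.
# All figures are illustrative assumptions, not measured values.

def max_batch_size(vram_gb, per_image_gb, model_overhead_gb):
    """Largest batch whose activations fit beside weights/optimizer state."""
    free = vram_gb - model_overhead_gb
    return max(int(free // per_image_gb), 0)

# GTX 1650: 4 GB VRAM. Assume ~2.5 GB for weights, optimizer state and the
# CUDA context, and ~1 GB of activations per training image.
print(max_batch_size(vram_gb=4, per_image_gb=1.0, model_overhead_gb=2.5))  # → 1
```

Under these assumed numbers only a single image fits, which would match the behaviour you describe; smaller input resolutions or a lighter backbone would raise the limit.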
Upvotes: 1