Avijit Dasgupta

Reputation: 2065

Semantic Segmentation using deep learning

I have a 512x512 image and I want to perform per-pixel classification of it. I have already trained the model on 80x80 patches, so at test time I have 512x512 = 262144 patches, each of dimension 80x80, and classifying them all is too slow. How can I improve the testing time? Please help me out.
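For reference, the slow baseline being described looks roughly like the sketch below. It assumes a saved Keras model and a NumPy image; the file name "patch_classifier.h5" and the random stand-in image are placeholders, not from the original post.

    import numpy as np
    import tensorflow as tf

    model = tf.keras.models.load_model("patch_classifier.h5")  # placeholder name
    image = np.random.rand(512, 512, 3).astype("float32")      # stand-in test image

    half = 40  # pad so an 80x80 patch exists around every pixel
    padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode="reflect")

    # 512 * 512 = 262144 separate forward passes, one patch per call: very slow
    labels = np.empty((512, 512), dtype=np.int64)
    for y in range(512):
        for x in range(512):
            patch = padded[y:y + 80, x:x + 80]
            labels[y, x] = model.predict(patch[None], verbose=0).argmax()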

Upvotes: 1

Views: 326

Answers (1)

FiReTiTi

Reputation: 5878

I might be wrong, but there are not many ways to speed up the testing phase. The main one is to reduce the number of neurons in the NN in order to reduce the number of operations:

  • 80x80 patches are really big; you may want to reduce their size and retrain your NN. That alone will greatly reduce the number of neurons.
  • Analyze the NN's weights/inputs/outputs to detect the neurons that do not matter. Some may, for example, always return 0, in which case they can be deleted from your NN; you then retrain the simplified architecture (see the first sketch after this list).
  • If you have not done so already, it is much faster to feed the network a batch of patches (the bigger the better) instead of one patch at a time (see the second sketch after this list).
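A minimal sketch of the dead-neuron check, assuming a saved Keras model and a hidden layer named "dense_1" (the file name, layer name, and stand-in data are all assumptions):

    import numpy as np
    import tensorflow as tf

    # load the trained classifier and probe one hidden layer's activations
    model = tf.keras.models.load_model("patch_classifier.h5")
    layer = model.get_layer("dense_1")
    probe = tf.keras.Model(inputs=model.input, outputs=layer.output)

    sample = np.random.rand(256, 80, 80, 3).astype("float32")  # stand-in patches
    acts = probe.predict(sample, verbose=0)

    # units whose activation is (almost) always zero contribute nothing and are
    # candidates for deletion before retraining the slimmer network
    dead = np.where(np.abs(acts).max(axis=0) < 1e-6)[0]
    print(f"{dead.size} of {acts.shape[-1]} units in 'dense_1' look dead")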
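And a sketch of batched inference for the 512x512 case in the question, processing one image row of patches per predict call so memory stays bounded (again, the model file and input shape are placeholders):

    import numpy as np
    import tensorflow as tf

    model = tf.keras.models.load_model("patch_classifier.h5")
    image = np.random.rand(512, 512, 3).astype("float32")  # stand-in test image

    half = 40
    padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode="reflect")

    labels = np.empty((512, 512), dtype=np.int64)
    for y in range(512):
        # build the 512 patches centred on row y in one array ...
        row = np.stack([padded[y:y + 80, x:x + 80] for x in range(512)])
        # ... and classify them with one batched call instead of 512 calls
        probs = model.predict(row, batch_size=512, verbose=0)
        labels[y] = probs.argmax(axis=1)

The batch size is a tuning knob: the larger it is, the better the GPU is kept busy, up to whatever fits in memory.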

Upvotes: 1
