Reputation: 71
I trained a Faster R-CNN model with the TensorFlow Object Detection API on a custom dataset. The loss is ~2 after 3.5k steps. However, when I ran eval.py, the mAP scores are all almost 0, as shown below.
I do not understand why this is the case. However, when I look at the detection images at 3.5k steps, the model has captured some of the boxes, as shown below.
Can someone please explain why the mAP scores are close to zero, even though the model has learned to output quite a few boxes?
Upvotes: 1
Views: 1246
Reputation: 445
The images are showing the per-category AP@0.5IOU scores, not the overall mAP. As you can see under PerformanceByCategory, the evaluation reports a score for each category; in your case that is 'awning-tricycle', 'bicycle', 'bus', 'car', 'ignored regions', etc.
As the image output shows, the categories the model does detect, such as 'car', 'motor', and 'pedestrian', have non-zero AP scores, while the remaining categories have AP scores of zero. That means the model has not yet found instances of those categories in the test images.
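Note that the overall mAP is just the mean of these per-category AP values, so the many zero categories drag the summary number toward zero even when a few categories are detected well. A minimal sketch of that arithmetic (the category names match your screenshot, but the AP numbers are illustrative placeholders, not your real results):

```python
# Sketch: mAP is the mean of the per-category AP values.
# The AP values below are made up for illustration only.
per_category_ap = {
    "car": 0.31,
    "pedestrian": 0.12,
    "motor": 0.05,
    "bicycle": 0.0,
    "bus": 0.0,
    "awning-tricycle": 0.0,
    "ignored regions": 0.0,
}

mAP = sum(per_category_ap.values()) / len(per_category_ap)
print(f"mAP@0.5IOU: {mAP:.3f}")  # ~0.069 -- close to zero even though cars are found
```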
This could be due to many variables in your experiment. Here are some questions to ask yourself: how many training images (and labelled objects) do you have per category? The distribution of training images per category should be more or less balanced, and the same goes for the test images. If there are far more training examples for cars and pedestrians, the model is more likely to pick up car and pedestrian objects and therefore shows non-zero AP scores for them, while an under-represented category such as bicycle ends up with an AP of zero. A quick way to check this is to count the ground-truth boxes per category in your training data, as in the sketch below.
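Here is a rough sketch of that check, assuming TF 2.x eager execution, that your training TFRecord was written with the standard Object Detection API feature key 'image/object/class/text', and a placeholder file name "train.record" (adjust both to your setup; on TF 1.x you can iterate with tf.python_io.tf_record_iterator instead):

```python
# Sketch: count ground-truth boxes per category in a TF OD API training TFRecord.
# "train.record" is a placeholder path; 'image/object/class/text' is the key
# written by the API's dataset creation scripts -- verify it matches your records.
import collections
import tensorflow as tf

counts = collections.Counter()
for raw_record in tf.data.TFRecordDataset("train.record"):
    example = tf.train.Example()
    example.ParseFromString(raw_record.numpy())
    labels = example.features.feature["image/object/class/text"].bytes_list.value
    counts.update(label.decode("utf-8") for label in labels)

# Print categories from most to least frequent to spot under-represented classes.
for category, n in counts.most_common():
    print(f"{category}: {n} boxes")
```

If the categories with zero AP also turn out to have very few boxes here, rebalancing or augmenting those classes is the first thing to try.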
Upvotes: 1