Reputation: 2123
I'm not sure if this is a misunderstanding on my part or a bug in the TF Object Detection (OD) API code, but I figured I'd try here first before posting to GitHub.
Basically, I am comparing two models in TensorBoard, red vs. green. I find that the red model is slightly better at overall mAP, mAP@0.50 IOU, and mAP@0.75 IOU. However, green is better at every mAP split by object size: mAP large, medium, and small (see image below at 67.5k steps, where the blue arrow is).
Now I don't have a PhD in math, but my assumption was that if a model has higher mAP on small, medium, and large objects, it should have a higher overall mAP...
Here are the exact values: (All values obtained at 67.5k steps, without any smoothing)
                 Red      Green
mAP              .3599    .3511
mAP@0.50 IOU     .5670    .5489
mAP@0.75 IOU     .3981    .3944
mAP (large)      .5557    .7404
mAP (medium)     .3788    .3941
mAP (small)      .1093    .1386
Upvotes: 1
Views: 855
Reputation: 97
One way to gain more insight is to analyze the statistics of bounding box sizes (small, medium, large) in your dataset. Here is a link for mAP calculation, where the TF Object Detection API describes how small and medium boxes are defined.
I can imagine this is happening because you have many more medium-size bounding boxes than large-size ones: the overall mAP is computed over all boxes pooled together, so the size distribution of your dataset determines how much each bucket influences it. Also, I would put less weight on the small-box numbers, since both models' mAP (small) values are quite low.
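There is also a second effect worth knowing: the overall AP is computed over the pooled detection list ranked by confidence, not as any average of the per-size APs, so how a model's scores are calibrated *across* size buckets matters. The toy sketch below (invented scores and match flags, and a simplified non-interpolated AP rather than COCO's 101-point interpolated version) constructs a case where "green" beats "red" in every size bucket, yet "red" wins on the pooled list, which mirrors the numbers in the question:

```python
def average_precision(detections, num_gt):
    # Non-interpolated AP: mean over ground truths of the precision at each
    # true positive; missed ground truths contribute zero.
    # `detections` is a list of (score, is_true_positive) tuples.
    detections = sorted(detections, key=lambda d: -d[0])
    tp = 0
    precisions = []
    for rank, (_, is_tp) in enumerate(detections, start=1):
        if is_tp:
            tp += 1
            precisions.append(tp / rank)
    return sum(precisions) / num_gt

# Hypothetical setup: two ground-truth boxes per size bucket (4 total).
# Green's small-object false positives score higher than its large-object
# true positives, which drags down its pooled AP but not its per-bucket APs.
green = {
    "large": [(0.50, True), (0.40, True)],
    "small": [(0.95, True), (0.90, False), (0.88, False), (0.86, True)],
}
red = {
    "large": [(0.95, True), (0.90, False), (0.85, True)],
    "small": [(0.80, False), (0.70, True), (0.60, True)],
}

for name, model in [("green", green), ("red", red)]:
    ap_l = average_precision(model["large"], 2)
    ap_s = average_precision(model["small"], 2)
    ap_all = average_precision(model["large"] + model["small"], 4)
    print(f"{name}: AP(large)={ap_l:.3f}  AP(small)={ap_s:.3f}  AP(all)={ap_all:.3f}")
# green: AP(large)=1.000  AP(small)=0.750  AP(all)=0.692
# red:   AP(large)=0.833  AP(small)=0.583  AP(all)=0.733
```

So higher per-bucket mAPs do not guarantee a higher overall mAP, even on the same dataset: pooling interleaves each model's detections by confidence, and a high-scoring false positive in one bucket can rank above true positives from the others.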
Upvotes: 2