Reputation: 1305
I created an images SFrame and merged it with an annotations SFrame. I have verified that the coordinates of the annotation boxes match the locations of the features measured in Photoshop. However, the models I create are non-functional, so I explored the merged dataset with
data['image_with_ground_truth'] = tc.object_detector.util.draw_bounding_boxes(data['image'], data['annotations'])
and found that all the annotations are squashed into the top-left corner in Turi Create, despite actually being widely distributed across the source image (as in the second image). The annotations list column shows that the coordinates are read into Turi Create correctly, but they are mapped badly onto what the model sees as bounding boxes.
Where should I look to find the scaling problem in Turi Create?
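One quick way to localize this kind of problem is to check whether the boxes even fit inside their images. Turi Create expects annotations in pixel coordinates of the image itself, as dicts of the form `{'label': ..., 'coordinates': {'x': cx, 'y': cy, 'width': w, 'height': h}}`, where `(x, y)` is the box *center*. A minimal sanity check (pure Python, no `turicreate` needed; the function name is illustrative):

```python
def boxes_out_of_bounds(annotations, img_width, img_height):
    """Return the annotations whose boxes extend outside the image.

    If many boxes overflow (or all legal boxes cluster in a tiny region),
    the annotation coordinates were likely written at a different scale
    than the image's actual pixel dimensions.
    """
    bad = []
    for ann in annotations:
        c = ann['coordinates']
        # (x, y) is the box center, so the box spans x +/- width/2, y +/- height/2.
        if (c['x'] - c['width'] / 2 < 0 or
                c['y'] - c['height'] / 2 < 0 or
                c['x'] + c['width'] / 2 > img_width or
                c['y'] + c['height'] / 2 > img_height):
            bad.append(ann)
    return bad

# Example: this box's right edge (600 + 60 = 660) overflows a 640x480 image.
anns = [{'label': 'cat',
         'coordinates': {'x': 600, 'y': 100, 'width': 120, 'height': 50}}]
print(boxes_out_of_bounds(anns, 640, 480))
```

Running this per row of the merged SFrame (using each image's `.width`/`.height`) would show immediately whether the stored coordinates match the image scale.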
Upvotes: 0
Views: 62
Reputation: 1305
The version of ml-annotate I was using output coordinates with a different scale factor for each image in the set: some close to correct, some off by as much as 3.3x.
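Once the per-image scale factor is known, the fix is a straightforward rescale of each box before handing the SFrame to `tc.object_detector.create`. A sketch under the assumption that you can recover, per image, the resolution the annotation tool actually wrote against (`src_width`/`src_height` are that assumed resolution, not part of the Turi Create API):

```python
def rescale_annotations(annotations, src_width, src_height, img_width, img_height):
    """Rescale Turi Create-style annotations from the resolution the
    annotation tool assumed (src_*) to the image's real pixel size (img_*)."""
    sx = img_width / src_width
    sy = img_height / src_height
    out = []
    for ann in annotations:
        c = ann['coordinates']
        out.append({
            'label': ann.get('label'),
            'coordinates': {
                'x': c['x'] * sx,
                'y': c['y'] * sy,
                'width': c['width'] * sx,
                'height': c['height'] * sy,
            },
        })
    return out

# Example: annotations written against a 200x100 canvas, image is 400x200,
# so every coordinate doubles.
fixed = rescale_annotations(
    [{'label': 'cat', 'coordinates': {'x': 50, 'y': 25, 'width': 40, 'height': 20}}],
    200, 100, 400, 200)
print(fixed)
```

Applied as an `SArray.apply` over the annotations column (with each row's image dimensions), this would undo the inconsistent per-image scaling before training.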
Upvotes: 0