OtterFamily

Reputation: 803

How can I get confidence level from dlib.simple_object_detector

I have trained my own model to recognize a pattern, which I have as an .svm file, and I am supporting that by tracking motion when I have no good detection. This is working great, but the issue I'm having is that I sometimes get false positives which override a recently started motion track.

I tried inspecting the detector with dir() but couldn't find any confidence field, and I checked the reference documentation, but I couldn't find a way to get this detector to output its confidence level.

What I'd like to do is basically have a threshold for the quality of detection the system is willing to accept, which gradually lowers as time goes by. I.e., if it loses track of my pattern and immediately picks some random corner of the image as a low-confidence detection, that won't override my recent (and therefore reliable) motion track. Whereas if the motion track has been running for a long time, it is more likely to have slid off, and I'm more inclined to trust a low-quality detection.
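
Roughly, the gating logic I have in mind looks like this (hypothetical sketch, since I don't yet have a confidence value to feed into it; the names and constants are placeholders):

MIN_CONFIDENCE = 1.0      # confidence needed to override a fresh motion track
DECAY_PER_FRAME = 0.05    # accept weaker detections as the motion track ages

def should_accept_detection(confidence, tracker_age):
    # The longer we've been coasting on the motion track, the lower
    # the bar for accepting a new (possibly low-confidence) detection.
    required = MIN_CONFIDENCE - DECAY_PER_FRAME * tracker_age
    return confidence >= required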

TL;DR: How can I get the confidence level for this detector? Thanks so much for any help you might be able to give me.

I've attached my code below for context:

import os
import sys
import glob
import dlib
import cv2


detector_path = sys.argv[1]  # first argument: path to the .svm detector to use
video_path = sys.argv[2]  # second argument: folder of test images (doesn't need to be drawn from video)


win = dlib.image_window()
files_to_test = os.listdir(video_path)

detector = dlib.simple_object_detector(detector_path)

tracker = None
tracker_age = 0
for file in files_to_test:
    img = dlib.load_rgb_image(video_path + "/" + file)
    dets = detector(img)
    print("Number of faces detected: {}".format(len(dets)))
    for k, d in enumerate(dets):
        print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
            k, d.left(), d.top(), d.right(), d.bottom()))
    if len(dets) > 0:
        tracker = dlib.correlation_tracker()
        d = dets[0]
        print(dir(d))
        x = d.left()
        y = d.top()
        x2 = d.right()
        y2 = d.bottom()
        rect = dlib.rectangle(x, y, x2, y2)
        tracker.start_track(img, rect)
        tracker_age = 0
        win.clear_overlay()
        win.add_overlay(dets)
    else:
        tracker_age += 1
        print("relying on motion track for the past {} frames".format(tracker_age))
        if tracker is not None:
            tracker.update(img)
            pos = tracker.get_position()
            startX = int(pos.left())
            startY = int(pos.top())
            endX = int(pos.right())
            endY = int(pos.bottom())
            # draw the bounding box from the correlation object tracker
            cv2.rectangle(img, (startX, startY), (endX, endY),
                (0, 255, 0), 2)

    win.set_image(img)
    dlib.hit_enter_to_continue()

Upvotes: 1

Views: 1779

Answers (1)

zampnrs

Reputation: 365

With dlib.simple_object_detector you won't get what you need; try the function below:

[boxes, confidences, detector_idxs] = dlib.fhog_object_detector.run_multiple(detectors, image, upsample_num_times=1, adjust_threshold=0.0)

See http://dlib.net/train_object_detector.py.html for more info
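
For example, something along these lines should give you the confidence values (a sketch, assuming the .svm produced by train_simple_object_detector also loads as an fhog_object_detector; the file names and the 0.5 threshold are placeholders):

import dlib

# Load the same .svm, but as an fhog_object_detector so run_multiple can be used.
detector = dlib.fhog_object_detector("detector.svm")
img = dlib.load_rgb_image("frame.jpg")

# run_multiple expects a list of detectors; a single-element list is fine.
boxes, confidences, detector_idxs = dlib.fhog_object_detector.run_multiple(
    [detector], img, upsample_num_times=1, adjust_threshold=0.0)

for box, confidence in zip(boxes, confidences):
    print("box {} with confidence {}".format(box, confidence))
    if confidence > 0.5:  # placeholder threshold, tune for your model
        # accept this detection / (re)start your correlation tracker here
        pass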

Upvotes: 2
