Ace Edith

Reputation: 11

Does every occurrence of an object need to be labelled when annotating for YOLO, or for image recognition in general?

I need to prepare a dataset of stacked books. Say an image contains 5 books but I only create bounding boxes for 4 of them. Will that affect the performance of my model in any way?

It is hard to create bounding boxes for stacked books when they are placed at a weird angle, and I might have missed some of the books while drawing boxes because I got distracted by the number of overlapping lines. On top of that, the bounding boxes are axis-aligned, so when the books are slanted, the box for one book can cover several others. Is this bad practice? Is it okay if I just leave some of the books unboxed?

Lastly, if I train my model only on individual books (not stacked), will the books still be detected once they are stacked up and over half of each book is covered by other books?

Upvotes: 1

Views: 252

Answers (1)

Aaron Jones

Reputation: 1170

Although the bounding boxes might overlap, if books are the only object you want to detect, you should annotate every book: any instance you leave unlabelled is treated as background during training, which teaches the model to ignore real books and makes your dataset less reliable, especially if you are working with a small number of images.
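
If you are worried about missing instances, a quick sanity check is to redraw each label file over its image and eyeball the result before training. Here is a minimal sketch for YOLO-format labels; the directory layout and file names are assumptions, and each label line is class x_center y_center width height with values normalised to [0, 1]:

from pathlib import Path

import matplotlib.image as mpimg
import matplotlib.patches as patches
import matplotlib.pyplot as plt

def review_annotations(image_path, label_path):
    """Draw the YOLO boxes stored in label_path over the image."""
    image = mpimg.imread(image_path)
    height, width = image.shape[:2]

    _, ax = plt.subplots(1, figsize=(8, 8))
    ax.imshow(image)
    ax.axis("off")

    for line in Path(label_path).read_text().splitlines():
        _, xc, yc, w, h = (float(v) for v in line.split())
        # Convert normalised centre/size to pixel corner coordinates.
        box_w, box_h = w * width, h * height
        x1, y1 = xc * width - box_w / 2, yc * height - box_h / 2
        ax.add_patch(patches.Rectangle((x1, y1), box_w, box_h, linewidth=2,
                                       edgecolor="red", facecolor="none"))
    plt.show()

# Hypothetical paths; adapt to your own dataset layout.
review_annotations("images/stack_01.jpg", "labels/stack_01.txt")

If a book in the stack has no box drawn over it, you catch the mistake before the model ever trains on that image.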

Alternatively, you can try colour-splashing the relevant pixels of your image, which is done in the Mask R-CNN repo.

[Example image: Mask R-CNN instance visualization with colour splash]

Below is the display_instances function from Mask R-CNN's visualization file (visualize.py).


import numpy as np
import matplotlib.pyplot as plt
from matplotlib import patches
from matplotlib.patches import Polygon
from skimage.measure import find_contours

# random_colors and apply_mask are helper functions defined alongside
# display_instances in the repo's visualize.py (importable as
# mrcnn.visualize if the package is installed).
from mrcnn.visualize import random_colors, apply_mask

def display_instances(image, boxes, masks, class_ids, class_names,
                      scores=None, title="",
                      figsize=(16, 16), ax=None,
                      show_mask=True, show_bbox=True,
                      colors=None, captions=None):
    """
    boxes: [num_instances, (y1, x1, y2, x2)] in image coordinates.
    masks: [height, width, num_instances]
    class_ids: [num_instances]
    class_names: list of class names of the dataset
    scores: (optional) confidence scores for each box
    title: (optional) Figure title
    show_mask, show_bbox: To show masks and bounding boxes or not
    figsize: (optional) the size of the image
    colors: (optional) An array of colors to use for each object
    captions: (optional) A list of strings to use as captions for each object
    """
    # Number of instances
    N = boxes.shape[0]
    if not N:
        print("\n*** No instances to display *** \n")
    else:
        assert boxes.shape[0] == masks.shape[-1] == class_ids.shape[0]

    # If no axis is passed, create one and automatically call show()
    auto_show = False
    if not ax:
        _, ax = plt.subplots(1, figsize=figsize)
        auto_show = True

    # Generate random colors
    colors = colors or random_colors(N)

    # Show area outside image boundaries.
    height, width = image.shape[:2]
    ax.set_ylim(height + 10, -10)
    ax.set_xlim(-10, width + 10)
    ax.axis('off')
    ax.set_title(title)

    masked_image = image.astype(np.uint32).copy()
    for i in range(N):
        color = colors[i]

        # Bounding box
        if not np.any(boxes[i]):
            # Skip this instance. Has no bbox. Likely lost in image cropping.
            continue
        y1, x1, y2, x2 = boxes[i]
        if show_bbox:
            p = patches.Rectangle((x1, y1), x2 - x1, y2 - y1, linewidth=2,
                                alpha=0.7, linestyle="dashed",
                                edgecolor=color, facecolor='none')
            ax.add_patch(p)

        # Label
        if not captions:
            class_id = class_ids[i]
            score = scores[i] if scores is not None else None
            label = class_names[class_id]
            caption = "{} {:.3f}".format(label, score) if score else label
        else:
            caption = captions[i]
        ax.text(x1, y1 + 8, caption,
                color='w', size=11, backgroundcolor="none")

        # Mask
        mask = masks[:, :, i]
        if show_mask:
            masked_image = apply_mask(masked_image, mask, color)

        # Mask Polygon
        # Pad to ensure proper polygons for masks that touch image edges.
        padded_mask = np.zeros(
            (mask.shape[0] + 2, mask.shape[1] + 2), dtype=np.uint8)
        padded_mask[1:-1, 1:-1] = mask
        contours = find_contours(padded_mask, 0.5)
        for verts in contours:
            # Subtract the padding and flip (y, x) to (x, y)
            verts = np.fliplr(verts) - 1
            p = Polygon(verts, facecolor="none", edgecolor=color)
            ax.add_patch(p)
    ax.imshow(masked_image.astype(np.uint8))
    if auto_show:
        plt.show()
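
For reference, here is how this function is typically called in the repo's demo notebook; model is assumed to be a trained Mask R-CNN model, class_names the list of dataset classes, and the image file name is a stand-in:

import skimage.io

# Run detection on one image and visualize the results.
image = skimage.io.imread("stacked_books.jpg")  # hypothetical file
results = model.detect([image], verbose=0)
r = results[0]
display_instances(image, r["rois"], r["masks"], r["class_ids"],
                  class_names, r["scores"])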

I don't know what type of network you are using, which will affect how well it can detect a book that is turned or lying flat if you only train on the top part of a book. In general, a detector copes poorly with heavy occlusion it has never seen during training, so training only on individual, unstacked books is unlikely to transfer well to stacks where more than half of each book is hidden.

Upvotes: 1
