Reputation: 57
I am new to computer vision and image recognition. For my first CV project I am developing a tool that detects apples (the fruit) in images.
What I have so far:
I developed a convolutional neural net (CNN) in Python using TensorFlow that classifies an image as apple or not-apple. The drawback is that my CNN only works on images where the apple is the only object in the image. My training data set looks something like:
What I want to achieve: I would like to be able to detect an apple in an image and draw a bounding box around it. The images, however, would be full of other objects, like in this image of a picnic:
Possible approaches:
Sliding Window: I would break my photo down into smaller images. I would start with a large window in the top-left corner and move it to the right by a step size. When I reach the right border of the image, I would move down a certain number of pixels and repeat. This is effectively a sliding window, and every one of these smaller images would be run through my CNN.
The window size would get smaller and smaller until an apple is found. The downside is that I would be running hundreds of crops through my CNN, which would take a long time. Additionally, if there is no apple in the image, all of that time would be wasted for nothing.
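The scan described above can be sketched as a simple generator; the window size and step used here are illustrative values, not tuned parameters:

```python
import numpy as np

def sliding_windows(image, window_size, step):
    """Yield (x, y, crop) tuples scanning left-to-right, top-to-bottom."""
    h, w = image.shape[:2]
    win_h, win_w = window_size
    for y in range(0, h - win_h + 1, step):
        for x in range(0, w - win_w + 1, step):
            yield x, y, image[y:y + win_h, x:x + win_w]
```

Each yielded crop would then be resized to the CNN's input size and classified; to shrink the window over iterations, you would wrap this in an outer loop over decreasing `window_size` values (an image pyramid).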
Extracting foreground objects: Another approach could be to extract all the foreground elements from the image (using OpenCV maybe?) and run only those objects through my CNN.
Compared to the sliding window approach, I would be running a handful of images through my CNN vs. hundreds of images.
These are the two approaches I could think of, but I was wondering if there are better ones in terms of speed. The sliding window approach would eventually work, but it would take a really long time to find the apple's bounding box.
I would really appreciate it if someone could give me some guidance (maybe I'm on a completely wrong track?), a link to some reading material, or some example code for extracting foreground elements. Thanks!
Upvotes: 0
Views: 733
Reputation: 3408
A better way to do this is to use a Single Shot MultiBox Detector (SSD) or "You Only Look Once" (YOLO). Before these approaches were designed, it was common to detect objects the way you suggest in the question.
A Python implementation of SSD is available here. OpenCV is used in the YOLO implementation. You can retrain the networks on apples if the current versions do not detect them, or if your project requires you to build the system from scratch.
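For intuition about what these detectors do internally: both SSD and YOLO predict many overlapping candidate boxes in a single forward pass and then prune them with non-maximum suppression (NMS), keeping only the highest-scoring box per object. A minimal, framework-free sketch of NMS (boxes given as `(x1, y1, x2, y2)` corners):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Return indices of kept boxes, best score first per overlap cluster."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        # keep a box only if it does not heavily overlap an already-kept one
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep
```

In practice you would use the NMS built into the detection framework; this sketch is just to show why a single detector pass replaces the hundreds of per-window classifications in the sliding-window scheme.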
Upvotes: 2