Ulrikop

Reputation: 135

Distinguishing objects with OpenCV

I want to identify Lego bricks for a Lego sorting machine I am building (I use C++ with OpenCV). That means I have to distinguish between objects that look very similar.

The bricks arrive at my camera individually on a flat conveyor. But they might lie in any possible orientation: upside down, on their side or "normal".

My approach is to teach the sorting machine the bricks by filming them with the camera in lots of different positions and rotations. The features of each and every view are calculated with the SURF algorithm.

void calculateFeatures(const cv::Mat& image,
                       std::vector<cv::KeyPoint>& keypoints,
                       cv::Mat& descriptors)
{
  // detector == cv::SurfFeatureDetector(10)
  detector->detect(image, keypoints);
  // extractor == cv::SurfDescriptorExtractor()
  extractor->compute(image, keypoints, descriptors);
}

If there is an unknown brick (the brick that I want to sort), its features also get calculated and matched against the known ones. To find wrongly matched features I proceed as described in the book OpenCV 2 Cookbook:

  1. With the matcher (= cv::BFMatcher(cv::NORM_L2)) the two nearest neighbours are searched in both directions:

    matcher.knnMatch(descriptorsImage1, descriptorsImage2,
                     matches1, 2);
    matcher.knnMatch(descriptorsImage2, descriptorsImage1,
                     matches2, 2);
    
  2. I check the ratio between the distances of the two found nearest neighbours. If the two distances are very similar, the match is ambiguous and likely false, so it gets thrown away.

    // loop for matches1 and matches2
    for (std::vector<std::vector<cv::DMatch> >::iterator matchIterator = matches.begin();
         matchIterator != matches.end(); ++matchIterator)
      if ((*matchIterator)[0].distance / (*matchIterator)[1].distance > 0.65f)
        matchIterator->clear(); // ambiguous match: throw it away
  3. Finally, only symmetrical match pairs are accepted. These are matches in which not only is n1 the nearest neighbour to feature f1, but f1 is also the nearest neighbour to n1.

    for (std::vector<std::vector<cv::DMatch> >::const_iterator matchIterator1 = matches1.begin();
         matchIterator1 != matches1.end(); ++matchIterator1)
      for (std::vector<std::vector<cv::DMatch> >::const_iterator matchIterator2 = matches2.begin();
           matchIterator2 != matches2.end(); ++matchIterator2)
        if (!matchIterator1->empty() && !matchIterator2->empty() &&
            (*matchIterator1)[0].queryIdx == (*matchIterator2)[0].trainIdx &&
            (*matchIterator2)[0].queryIdx == (*matchIterator1)[0].trainIdx)
          // symmetrical match: keep it as a good match

Now only pretty good matches remain. To filter out some more bad matches, I keep only those matches that are consistent with the epipolar geometry between the two views, estimated with the fundamental matrix:

std::vector<uchar> inliers(points1.size(), 0);
cv::findFundamentalMat(
    cv::Mat(points1), cv::Mat(points2), // matching points
    inliers,       // match status: inlier or outlier
    CV_FM_RANSAC,  // RANSAC method
    3,             // max distance to epipolar line
    0.99);         // confidence probability

// extract the surviving (inlier) matches
std::vector<cv::DMatch> goodMatches;
std::vector<uchar>::const_iterator itIn = inliers.begin();
std::vector<cv::DMatch>::const_iterator itM = allMatches.begin();
// for all matches
for ( ; itIn != inliers.end(); ++itIn, ++itM)
  if (*itIn)
    goodMatches.push_back(*itM); // it is a valid match

good matches The result is pretty good, but in cases of extreme similarity faults still occur.
In the picture above you can see that a similar brick is recognized well.

bad matches However, in the second picture a wrong brick is recognized just as well.

Now the question is how I could improve the matching.

I had two different ideas:

All possible brick views

Upvotes: 6

Views: 3111

Answers (1)

jilles de wit

Reputation: 7138

I don't have a complete answer, but I have a few suggestions.

On the image analysis side:

  • It looks like your camera setup is pretty constant, so it should be easy to separate the brick from the background. I also see your system finding features in the background; this is unnecessary. Set all non-brick pixels to black to remove them from the analysis.
  • When you have located just the brick, your first step should be to filter likely candidates based on the size of the brick (i.e. its number of pixels). That way the faulty match in your example already becomes less likely.
  • You can take other features into account, such as the aspect ratio of the brick's bounding box, and its major and minor axes (the eigenvectors of the covariance matrix of the central moments).

These simpler features will give you a reasonable first filter to limit your search space.

On the mechanical side:

  • If bricks are actually coming down a conveyor, you should be able to "straighten" them along a straight edge, using something like a rod that lies across the belt at an angle to the direction of travel, so that the bricks arrive at your camera in a more uniform orientation, like so.
  • Similar to the previous point, you could use something like a very loose brush suspended across the belt to topple bricks that are standing up as they pass.

Again both these points will limit your search space.

Upvotes: 2
