Barney Szabolcs

Reputation: 12514

How to get extra information of blobs with SimpleBlobDetector?

@robot_sherrick answered this question for me; this is a follow-up question to his answer.

cv::SimpleBlobDetector in OpenCV 2.4 looks very exciting, but I am not sure I can make it work for more detailed data extraction.

I have the following concerns:

Upvotes: 2

Views: 19851

Answers (5)

William L

Reputation: 1

For those who still need an answer to this problem: getBlobContours() returns a list of all the contours, and you can use the logic from SimpleBlobDetectorImpl::findBlobs to compute the area, convexity, etc. (a sketch of that computation follows the snippet below).

import cv2

# Assumes 'frame' is an image already loaded (e.g. a video frame or a cv2.imread result)
colorGreen = (0, 255, 0)
colorRed = (0, 0, 255)

detector = cv2.SimpleBlobDetector_create()

# Detect keypoints and get contours
keypoints = detector.detect(frame)
contours = detector.getBlobContours()

# Loop through contours to display keypoint contours in green and rejected contours in red
keypointContours = []
rejectedContours = []

for currentContour in contours:
    found = False
    for currentKeypoint in keypoints:
        result = cv2.pointPolygonTest(currentContour, currentKeypoint.pt, False)

        if result == 1.0:
            keypointContours.append(currentContour)
            found = True
            break

    if not found:
        rejectedContours.append(currentContour)

displayFrame = frame.copy()
displayFrame = cv2.drawContours(displayFrame, keypointContours, -1, colorGreen, 2, cv2.LINE_8)
displayFrame = cv2.drawContours(displayFrame, rejectedContours, -1, colorRed, 2, cv2.LINE_8)

# Display the resulting frame
cv2.imshow("Output", displayFrame)
cv2.waitKey(0)

Upvotes: 0

steve

Reputation: 1169

I converted Joel's C++ to Java (since I'm working in Java). That is not an easy task, since the OpenCV Java version is more than a little different from the C++ version.

I added what I wanted and what I think the OP is looking for: the values of each blob that are used as selection criteria. Not only do I get a list of blobs matching the criteria, I also know what those values are for each blob.

The code does work, except that the convexity filter crashed hard at first; the likely cause is that Java's convexHull returns hull point indices rather than points, so they have to be converted back to points before calling contourArea (see the comment in the listing below).

I wasn't able to extend SimpleBlobDetector. It might be possible, but I don't see much value in that. I did extend the params class, which isn't worth much either since it's a simple class of config values, but I used it because that is what Joel did, and it forces the implementation to stay similar to the SBD.

There are some utility classes/methods that are not included here, which I leave as an exercise. They are relatively easy things like calculating the hypotenuse of a right triangle.

package com.papapill.vision.detection;

import static com.papapill.utility.OpenCvUtil.getDistance;
import static com.papapill.utility.OpenCvUtil.roundToInt;

import androidx.annotation.NonNull;

import com.papapill.support.ImageLocation;

import org.opencv.core.KeyPoint;
import org.opencv.core.Mat;
import org.opencv.core.MatOfInt;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.features2d.SimpleBlobDetector;
import org.opencv.features2d.SimpleBlobDetector_Params;
import org.opencv.imgproc.Imgproc;
import org.opencv.imgproc.Moments;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/**
 * Provides a similar capability to {@link SimpleBlobDetector}, with richer results --
 * properties of each blob that were used to select/filter.
 *
 * @implNote
 * <a href="https://stackoverflow.com/questions/13534723/how-to-get-extra-information-of-blobs-with-simpleblobdetector">See</a>
 * <p>
 * The example code passes a Point to norm() to get what it calls 'dist' (which I assume is distance),
 * but in Java, norm() requires a Mat and I don't know how to convert a Point to a Mat :(
 * Computing the distance is easy enough with the Pythagorean theorem, and probably faster than norm().
 * <a href="https://stackoverflow.com/questions/38365900/using-opencv-norm-function-to-get-euclidean-distance-of-two-points">See</a>
 */
public final class BlobDetector {
    /**
     * Extends the {@link SimpleBlobDetector} params class to include criteria from the C++ OpenCV
     * that are missing from the Java version.
     */
    public static final class Criteria extends SimpleBlobDetector_Params {
        /**
         * Blob color filter.
         * See {@link SimpleBlobDetector}
         *
         * @implNote
         * This criterion seems to be excluded from the Java interface since the value is declared
         * in C++ as uchar, which is not supported in Java. It's just a byte.
         */
        private byte blobColor;
        public byte get_blobColor() {
            return blobColor;
        }
        public void set_blobColor(byte to) {
            blobColor = to;
        }
    }

    /**
     * Describes a blob.
     * Fields are final, but contour and keyPoint reference mutable OpenCV types.
     */
    public static final class Blob {
        public Blob(
                @NonNull MatOfPoint contour,
                @NonNull KeyPoint keyPoint,
                @NonNull ImageLocation center,
                double area,
                double circularity,
                double inertia,
                double convexity,
                double size,
                byte color) {
            this.contour = contour;
            this.keyPoint = keyPoint;
            this.center = center;
            this.size = size;
            this.area = area;
            this.circularity = circularity;
            this.inertia = inertia;
            this.convexity = convexity;
            this.color = color;
        }

        /**
         * Contour representation.
         */
        public final MatOfPoint contour;

        /**
         * KeyPoint representation.
         */
        public final KeyPoint keyPoint;

        /**
         * Area in square pixels.
         */
        public final double area;

        /**
         * Point of the image that is the center of the shape.
         */
        public final ImageLocation center;

        /**
         * Color.
         */
        public final byte color;

        /**
         * Represents how convex the shape is from 0 to 1 where 1 is completely convex.
         */
        public final double convexity;

        /**
         * Represents how circular the shape is from 0 to 1 where 1 is a circle.
         */
        public final double circularity;

        /**
         * Inertia ratio: measures how elongated the shape is, from 0 (a line) to 1 (e.g. a circle).
         */
        public final double inertia;

        /**
         * Size, a la SimpleBlobDetector: not intended to be precise, but helpful
         * for comparing the relative size of blobs.
         */
        public final double size;
    }

    /**
     * Internal class for collecting blob info.
     */
    private static final class BlobInfo {
        public BlobInfo(MatOfPoint contour) {
            this.contour = contour;
        }

        public final MatOfPoint contour;
        public double confidence = 1;
        public double area;
        public double circularity;
        public double inertia;
        public double convexity;
        public byte color;
        public Point center;
        public double size;

        public Blob build(Point keyPointLocation) {
            var middleBlobInfo = this;
            ImageLocation center = ImageLocation.fromCvPoint(middleBlobInfo.center);
            double radius = middleBlobInfo.size;
            MatOfPoint contour = middleBlobInfo.contour;
            KeyPoint keyPoint = new KeyPoint((float)keyPointLocation.x, (float)keyPointLocation.y, (float) radius);
            return new Blob(
                    contour,
                    keyPoint,
                    center,
                    area,
                    circularity,
                    inertia,
                    convexity,
                    radius,
                    color);
        }
    }

    public BlobDetector() {
        this.criteria = new Criteria();
    }

    private Criteria criteria;

    public Criteria getCriteria() {
        return criteria;
    }
    
    public void setCriteria(@NonNull Criteria to) {
        criteria = to;
    }

    public List<Blob> detect(@NonNull Mat image) {
        var blobs = new ArrayList<Blob>();

        // convert to grayscale if not already
        Mat grayscaleImage;
        if (image.channels() == 3) {
            grayscaleImage = new Mat();
            Imgproc.cvtColor(image, grayscaleImage, Imgproc.COLOR_BGR2GRAY);
        } else {
            grayscaleImage = image;
        }

        List<List<BlobInfo>> blobsBySameThreshold = detectForEachThreshold(grayscaleImage);

        // select blobs by repeatability -- number of times found over set of thresholds
        for (var blobsForSomeThreshold : blobsBySameThreshold) {
            if (blobsForSomeThreshold.size() >= criteria.get_minRepeatability()) {
                var sumPoint = new Point(0, 0);
                double normalizer = 0;
                for (var blobForSomeThreshold : blobsForSomeThreshold) {
                    sumPoint = PointUtil.plus(sumPoint, PointUtil.times(blobForSomeThreshold.center, blobForSomeThreshold.confidence));
                    normalizer += blobForSomeThreshold.confidence;
                }
                sumPoint = PointUtil.times(sumPoint, 1.0 / normalizer);
                BlobInfo middleBlobInfo = blobsForSomeThreshold.get(blobsForSomeThreshold.size() / 2);
                blobs.add(middleBlobInfo.build(sumPoint));
            }
        }

        return blobs;
    }

    /**
     * Returns blob-info for each contour that satisfies the selection criteria -- for each
     * threshold specified in the selection criteria.
     */
    private List<List<BlobInfo>> detectForEachThreshold(Mat grayscaleImage) {
        List<List<BlobInfo>> blobsBySameThreshold = new ArrayList<>();
        for (double thresh = criteria.get_minThreshold(); thresh < criteria.get_maxThreshold(); thresh += criteria.get_thresholdStep()) {
            Mat monochromeImage = new Mat();
            Imgproc.threshold(grayscaleImage, monochromeImage, thresh, 255, Imgproc.THRESH_BINARY);
            List<BlobInfo> blobsForThreshold = detectAndSelectBlobs(monochromeImage);
            List<List<BlobInfo>> newBlobs = new ArrayList<>();
            for (var blob : blobsForThreshold) {
                boolean isNew = true;
                for (var blobsForSomeOtherThreshold : blobsBySameThreshold) {
                    double distance = getDistance(blobsForSomeOtherThreshold.get(blobsForSomeOtherThreshold.size() / 2).center, blob.center);
                    isNew =
                            distance >= criteria.get_minDistBetweenBlobs() &&
                                    distance >= blobsForSomeOtherThreshold.get(blobsForSomeOtherThreshold.size() / 2).size &&
                                    distance >= blob.size;
                    if (!isNew) {
                        blobsForSomeOtherThreshold.add(blob);
                        int k = blobsForSomeOtherThreshold.size() - 1;
                        while (k > 0 && blobsForSomeOtherThreshold.get(k).size < blobsForSomeOtherThreshold.get(k-1).size) {
                            blobsForSomeOtherThreshold.set(k, blobsForSomeOtherThreshold.get(k-1));
                            k--;
                        }
                        blobsForSomeOtherThreshold.set(k, blob);
                        break;
                    }
                }
                if (isNew) {
                    var item = new ArrayList<BlobInfo>();
                    item.add(blob);
                    newBlobs.add(item);
                }
            }
            blobsBySameThreshold.addAll(newBlobs);
        }
        return blobsBySameThreshold;
    }

    /**
     * Finds contours in the image and adds a blob-info for each that satisfies the selection criteria.
     */
    private List<BlobInfo> detectAndSelectBlobs(Mat monochromeImage) {
        var blobInfos = new ArrayList<BlobInfo>();
        List<MatOfPoint> contours = new ArrayList<>();
        Mat tmpMonochromeImage = monochromeImage.clone(); // findContours could modify its input in older OpenCV versions, so work on a copy
        Imgproc.findContours(tmpMonochromeImage, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_NONE);
        for (var contour : contours) {
            BlobInfo blobInfo = new BlobInfo(contour);
            if (addBlobInfoIfSelected(blobInfo, contour, monochromeImage)) {
                blobInfos.add(blobInfo);
            }
        }
        return blobInfos;
    }

    /**
     * Calculates the attributes of a contour, caches them into blob-info and returns true if all
     * attributes satisfy the selection criteria.
     * If not selected, returns false; in that case some of the blob-info attributes may not be populated.
     */
    private boolean addBlobInfoIfSelected(BlobInfo blobInfo, MatOfPoint contour, Mat monochromeImage) {
        Moments moments = Imgproc.moments(contour);

        // filter on area
        if (criteria.get_filterByArea()) {
            double area = moments.m00;
            if (area < criteria.get_minArea() || area >= criteria.get_maxArea()) {
                return false;
            }
            blobInfo.area = area;
        }

        // filter on circularity
        if (criteria.get_filterByCircularity()) {
            double area = moments.m00;
            MatOfPoint2f contoursMatOfPoint2f = new MatOfPoint2f();
            contoursMatOfPoint2f.fromArray(contour.toArray());
            double perimeter = Imgproc.arcLength(contoursMatOfPoint2f, true);
            double circularity = 4 * Math.PI * area / (perimeter * perimeter);
            if (circularity < criteria.get_minCircularity() || circularity >= criteria.get_maxCircularity()) {
                return false;
            }
            blobInfo.circularity = circularity;
        }

        // filter on inertia
        if (criteria.get_filterByInertia()) {
            double denominator = Math.sqrt(Math.pow(2 * moments.mu11, 2) + Math.pow(moments.mu20 - moments.mu02, 2));
            double eps = 1e-2;
            double inertia;
            if (denominator > eps) {
                double cosmin = (moments.mu20 - moments.mu02) / denominator;
                double sinmin = 2 * moments.mu11 / denominator;
                double cosmax = -cosmin;
                double sinmax = -sinmin;

                double imin = 0.5 * (moments.mu20 + moments.mu02) - 0.5 * (moments.mu20 - moments.mu02) * cosmin - moments.mu11 * sinmin;
                double imax = 0.5 * (moments.mu20 + moments.mu02) - 0.5 * (moments.mu20 - moments.mu02) * cosmax - moments.mu11 * sinmax;
                inertia = imin / imax;
            } else {
                inertia = 1;
            }

            if (inertia < criteria.get_minInertiaRatio() || inertia >= criteria.get_maxInertiaRatio()) {
                return false;
            }

            blobInfo.confidence = inertia * inertia;
            blobInfo.inertia = inertia;
        }

        // filter on convexity
        if (criteria.get_filterByConvexity()) {
            // In the Java bindings, convexHull returns the *indices* of the hull points,
            // so build a contour of the actual hull points before measuring its area
            // (passing the index Mat straight to contourArea is what makes this filter crash).
            MatOfInt hull = new MatOfInt();
            Imgproc.convexHull(contour, hull);
            Point[] contourArray = contour.toArray();
            int[] hullIndices = hull.toArray();
            Point[] hullPoints = new Point[hullIndices.length];
            for (int i = 0; i < hullIndices.length; i++) {
                hullPoints[i] = contourArray[hullIndices[i]];
            }
            double area = Imgproc.contourArea(contour);
            double hullArea = Imgproc.contourArea(new MatOfPoint(hullPoints));
            double convexity = area / hullArea;
            if (convexity < criteria.get_minConvexity() || convexity >= criteria.get_maxConvexity()) {
                return false;
            }
            blobInfo.convexity = convexity;
        }

        blobInfo.center = new Point(moments.m10 / moments.m00, moments.m01 / moments.m00);

        // filter on color
        if (criteria.get_filterByColor()) {
            byte blobColor = criteria.get_blobColor();
            Mat.Atable<Byte> pixel = monochromeImage.at(byte.class, roundToInt(blobInfo.center.y), roundToInt(blobInfo.center.x));
            byte pixelColor = pixel.getV();
            if (pixelColor != blobColor) {
                return false;
            }
            blobInfo.color = pixelColor;
        }

        // calculate a size
        {
            List<Double> distances = new ArrayList<>();
            Point[] contourPoints = contour.toArray();
            for (Point contourPoint : contourPoints) {
                double distance = getDistance(blobInfo.center, contourPoint);
                distances.add(distance);
            }
            Collections.sort(distances);
            double medianA = distances.get((distances.size() - 1) / 2);
            double medianB = distances.get(distances.size() / 2);
            double average = (medianA + medianB) / 2.0;
            blobInfo.size = average;
        }

        return true;
    }
}

Upvotes: 0

thealmightygrant

Reputation: 701

So the code should look something like this:

cv::Mat inputImg = cv::imread(image_file_name, CV_LOAD_IMAGE_COLOR);   // Read a file
cv::SimpleBlobDetector::Params params; 
params.minDistBetweenBlobs = 10.0;  // minimum 10 pixels between blobs
params.filterByArea = true;         // filter my blobs by area of blob
params.minArea = 20.0;              // min 20 pixels squared
params.maxArea = 500.0;             // max 500 pixels squared
cv::SimpleBlobDetector myBlobDetector(params);
std::vector<cv::KeyPoint> myBlobs;
myBlobDetector.detect(inputImg, myBlobs);

If you then want to have these keypoints highlighted on your image:

cv::Mat blobImg;    
cv::drawKeypoints(inputImg, myBlobs, blobImg);
cv::imshow("Blobs", blobImg);

To access the info in the keypoints, you then just access each element like so:

for(std::vector<cv::KeyPoint>::iterator blobIterator = myBlobs.begin(); blobIterator != myBlobs.end(); blobIterator++){
   std::cout << "size of blob is: " << blobIterator->size << std::endl;
   std::cout << "point is at: " << blobIterator->pt.x << " " << blobIterator->pt.y << std::endl;
} 

Note: this has not been compiled and may have typos.

Upvotes: 11

Joel Teply

Reputation: 3296

Here is a version that will allow you to get the last contours back, via the getContours() method. They will match up by index to the keypoints.

class BetterBlobDetector : public cv::SimpleBlobDetector
{
public:

    BetterBlobDetector(const cv::SimpleBlobDetector::Params &parameters = cv::SimpleBlobDetector::Params());

    const std::vector < std::vector<cv::Point> > getContours();

protected:
    virtual void detectImpl( const cv::Mat& image, std::vector<cv::KeyPoint>& keypoints, const cv::Mat& mask=cv::Mat()) const;
    virtual void findBlobs(const cv::Mat &image, const cv::Mat &binaryImage,
                           std::vector<Center> &centers, std::vector < std::vector<cv::Point> >&contours) const;

};

Then in the .cpp:

using namespace cv;

BetterBlobDetector::BetterBlobDetector(const SimpleBlobDetector::Params &parameters)
    : SimpleBlobDetector(parameters)   // forward the params so the filters below actually use them
{
}

void BetterBlobDetector::findBlobs(const cv::Mat &image, const cv::Mat &binaryImage,
                                   vector<Center> &centers, std::vector < std::vector<cv::Point> >&curContours) const
{
    (void)image;
    centers.clear();

    curContours.clear();

    std::vector < std::vector<cv::Point> >contours;
    Mat tmpBinaryImage = binaryImage.clone();
    findContours(tmpBinaryImage, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);


    for (size_t contourIdx = 0; contourIdx < contours.size(); contourIdx++)
    {
        Center center;
        center.confidence = 1;
        Moments moms = moments(Mat(contours[contourIdx]));
        if (params.filterByArea)
        {
            double area = moms.m00;
            if (area < params.minArea || area >= params.maxArea)
                continue;
        }

        if (params.filterByCircularity)
        {
            double area = moms.m00;
            double perimeter = arcLength(Mat(contours[contourIdx]), true);
            double ratio = 4 * CV_PI * area / (perimeter * perimeter);
            if (ratio < params.minCircularity || ratio >= params.maxCircularity)
                continue;
        }

        if (params.filterByInertia)
        {
            double denominator = sqrt(pow(2 * moms.mu11, 2) + pow(moms.mu20 - moms.mu02, 2));
            const double eps = 1e-2;
            double ratio;
            if (denominator > eps)
            {
                double cosmin = (moms.mu20 - moms.mu02) / denominator;
                double sinmin = 2 * moms.mu11 / denominator;
                double cosmax = -cosmin;
                double sinmax = -sinmin;

                double imin = 0.5 * (moms.mu20 + moms.mu02) - 0.5 * (moms.mu20 - moms.mu02) * cosmin - moms.mu11 * sinmin;
                double imax = 0.5 * (moms.mu20 + moms.mu02) - 0.5 * (moms.mu20 - moms.mu02) * cosmax - moms.mu11 * sinmax;
                ratio = imin / imax;
            }
            else
            {
                ratio = 1;
            }

            if (ratio < params.minInertiaRatio || ratio >= params.maxInertiaRatio)
                continue;

            center.confidence = ratio * ratio;
        }

        if (params.filterByConvexity)
        {
            vector < Point > hull;
            convexHull(Mat(contours[contourIdx]), hull);
            double area = contourArea(Mat(contours[contourIdx]));
            double hullArea = contourArea(Mat(hull));
            double ratio = area / hullArea;
            if (ratio < params.minConvexity || ratio >= params.maxConvexity)
                continue;
        }

        center.location = Point2d(moms.m10 / moms.m00, moms.m01 / moms.m00);

        if (params.filterByColor)
        {
            if (binaryImage.at<uchar> (cvRound(center.location.y), cvRound(center.location.x)) != params.blobColor)
                continue;
        }

        //compute blob radius
        {
            vector<double> dists;
            for (size_t pointIdx = 0; pointIdx < contours[contourIdx].size(); pointIdx++)
            {
                Point2d pt = contours[contourIdx][pointIdx];
                dists.push_back(norm(center.location - pt));
            }
            std::sort(dists.begin(), dists.end());
            center.radius = (dists[(dists.size() - 1) / 2] + dists[dists.size() / 2]) / 2.;
        }

        centers.push_back(center);
        curContours.push_back(contours[contourIdx]);
    }
}

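// File-scope storage is a workaround: detectImpl() is declared const, so it cannot write to a member.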
static std::vector < std::vector<cv::Point> > _contours;

const std::vector < std::vector<cv::Point> > BetterBlobDetector::getContours() {
    return _contours;
}

void BetterBlobDetector::detectImpl(const cv::Mat& image, std::vector<cv::KeyPoint>& keypoints, const cv::Mat&) const
{
    //TODO: support mask
     _contours.clear();

    keypoints.clear();
    Mat grayscaleImage;
    if (image.channels() == 3)
        cvtColor(image, grayscaleImage, CV_BGR2GRAY);
    else
        grayscaleImage = image;

    vector < vector<Center> > centers;
    vector < vector<cv::Point> >contours;
    for (double thresh = params.minThreshold; thresh < params.maxThreshold; thresh += params.thresholdStep)
    {
        Mat binarizedImage;
        threshold(grayscaleImage, binarizedImage, thresh, 255, THRESH_BINARY);

        vector < Center > curCenters;
        vector < vector<cv::Point> >curContours, newContours;
        findBlobs(grayscaleImage, binarizedImage, curCenters, curContours);
        vector < vector<Center> > newCenters;
        for (size_t i = 0; i < curCenters.size(); i++)
        {

            bool isNew = true;
            for (size_t j = 0; j < centers.size(); j++)
            {
                double dist = norm(centers[j][ centers[j].size() / 2 ].location - curCenters[i].location);
                isNew = dist >= params.minDistBetweenBlobs && dist >= centers[j][ centers[j].size() / 2 ].radius && dist >= curCenters[i].radius;
                if (!isNew)
                {
                    centers[j].push_back(curCenters[i]);

                    size_t k = centers[j].size() - 1;
                    while( k > 0 && centers[j][k].radius < centers[j][k-1].radius )
                    {
                        centers[j][k] = centers[j][k-1];
                        k--;
                    }
                    centers[j][k] = curCenters[i];

                    break;
                }
            }
            if (isNew)
            {
                newCenters.push_back(vector<Center> (1, curCenters[i]));
                newContours.push_back(curContours[i]);
                //centers.push_back(vector<Center> (1, curCenters[i]));
            }
        }
        std::copy(newCenters.begin(), newCenters.end(), std::back_inserter(centers));
        std::copy(newContours.begin(), newContours.end(), std::back_inserter(contours));
    }

    for (size_t i = 0; i < centers.size(); i++)
    {
        if (centers[i].size() < params.minRepeatability)
            continue;
        Point2d sumPoint(0, 0);
        double normalizer = 0;
        for (size_t j = 0; j < centers[i].size(); j++)
        {
            sumPoint += centers[i][j].confidence * centers[i][j].location;
            normalizer += centers[i][j].confidence;
        }
        sumPoint *= (1. / normalizer);
        KeyPoint kpt(sumPoint, (float)(centers[i][centers[i].size() / 2].radius));
        keypoints.push_back(kpt);
        _contours.push_back(contours[i]);
    }
}

Upvotes: 5

mehmetdagli

Reputation: 1

// Access SimpleBlobDetector data for video

#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/features2d/features2d.hpp"

#include <iostream>
#include <math.h>
#include <vector>
#include <fstream>
#include <string>
#include <sstream>
#include <algorithm>

using namespace cv;
using namespace std;


int main(int argc, char *argv[])
{


    const char* fileName = "C:/Users/DAGLI/Desktop/videos/new/m3.avi";
    VideoCapture cap(fileName);
    if (!cap.isOpened())
    {
        cout << "Couldn't open Video  " << fileName << "\n";
        return -1;
    }
    for (;;)  // loop forever over the video frames
    {
        Mat frame,labelImg; 
        cap >> frame; 
        if(frame.empty()) break;  
        //imshow("main",frame);  

        Mat frame_gray;
        cvtColor(frame, frame_gray, CV_BGR2GRAY);  // VideoCapture frames are BGR


        //////////////////////////////////////////////////////////////////////////
        // convert binary_image
        Mat binaryx;
        threshold(frame_gray,binaryx,120,255,CV_THRESH_BINARY);


        Mat src, gray, thresh, binary;
        Mat out;
        vector<KeyPoint> keyPoints;

        SimpleBlobDetector::Params params;
        params.minThreshold = 120;
        params.maxThreshold = 255;
        params.thresholdStep = 100;

        params.minArea = 20; 
        params.minConvexity = 0.3;
        params.minInertiaRatio = 0.01;

        params.maxArea = 1000;
        params.maxConvexity = 10;

        params.filterByColor = false;
        params.filterByCircularity = false;



        src = binaryx.clone();

        SimpleBlobDetector blobDetector( params );

        blobDetector.detect( src, keyPoints );
        drawKeypoints( src, keyPoints, out, CV_RGB(255,0,0), DrawMatchesFlags::DEFAULT);


        cv::Mat blobImg;    
        cv::drawKeypoints(frame, keyPoints, blobImg);
        cv::imshow("Blobs", blobImg);

        for(int i=0; i<keyPoints.size(); i++){
            //circle(out, keyPoints[i].pt, 20, cvScalar(255,0,0), 10);
            //cout<<keyPoints[i].response<<endl;
            //cout<<keyPoints[i].angle<<endl;
            //cout<<keyPoints[i].size()<<endl;
            cout<<keyPoints[i].pt.x<<endl;
            cout<<keyPoints[i].pt.y<<endl;

        }
        imshow( "out", out );

        if ((cvWaitKey(40)&0xff)==27) break;  // break when Esc is pressed
    }
    system("pause");

}

Upvotes: 0
