Craagson

Comparing an image to a background image to find an object using OpenCV with Java

I have a project where I am trying to track a person as they move throughout a room. I am using an Arduino, some servo motors, and an Xbox Kinect as my camera.

I have a vision of allowing the project some training time, during which it can scan the room and build a database of images of the empty room. Then, when a person enters the room, the program can compute a simple difference image to produce a white blob for the person. From this white blob I can calculate the person's centre of mass, compare it to the centre of the image frame, and pass a command to the Arduino telling it how far, and in which direction, to move the servo motors. I am using Eclipse, writing in Java, and using OpenCV 2.4.6.
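
To make the geometry concrete, the centre-of-mass step I have in mind looks roughly like the sketch below (a minimal sketch using Imgproc.moments on the thresholded binary image; the method name blobOffset and the sign convention are placeholders, not my exact code):

    //Sketch: centre of mass of the white blob and its offset from the frame centre.
    //Assumes "blob" is the thresholded binary difference image (CV_8UC1).
    private static double[] blobOffset(Mat blob){
        Moments m = Imgproc.moments(blob, true); //binaryImage = true
        double[] offset = new double[2];

        if(m.get_m00() > 0){ //at least one white pixel found
            double cx = m.get_m10() / m.get_m00(); //blob centre of mass
            double cy = m.get_m01() / m.get_m00();
            offset[0] = cx - imWidth / 2.0;  //positive = person is right of frame centre
            offset[1] = cy - imHeight / 2.0; //positive = person is below frame centre
        }
        return offset;
    }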

I am stuck on getting a clear white blob. I have already written the methods that calculate the distance between the blob's centre of mass and the centre of the frame, but without a clearly defined blob this is useless. I have been trying to get the program to work by taking a snapshot of the background of my room, converting the image to binary, and then subtracting it from a binary image of my room with me in it. This has not worked. Is my vision of training the system and then comparing against these trained images valid, or should I be detecting the object a different way?
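
Roughly, the result I am after is something like the sketch below (a minimal sketch, not my actual code: it blurs both grayscale frames before the diff, thresholds, then uses a morphological open/close to clean the mask; the kernel size and threshold value are guesses, not tuned):

    //Sketch: grayscale difference against a stored background, then clean up the mask.
    //"background" and "current" are single-channel (CV_8UC1) Mats; parameter values are guesses.
    private static Mat cleanDiff(Mat background, Mat current){
        Mat bgBlur = new Mat();
        Mat currBlur = new Mat();
        Mat mask = new Mat();

        //blur to suppress sensor noise before differencing
        Imgproc.GaussianBlur(background, bgBlur, new Size(5, 5), 0);
        Imgproc.GaussianBlur(current, currBlur, new Size(5, 5), 0);

        Core.absdiff(bgBlur, currBlur, mask);
        Imgproc.threshold(mask, mask, 30, 255, Imgproc.THRESH_BINARY);

        //open removes small speckle, close fills holes inside the blob
        Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(5, 5));
        Imgproc.morphologyEx(mask, mask, Imgproc.MORPH_OPEN, kernel);
        Imgproc.morphologyEx(mask, mask, Imgproc.MORPH_CLOSE, kernel);

        return mask;
    }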

I have tried implementing opticalFlow(), but it seems erratic and not very accurate.

Any information on the topic would be extremely helpful. Thank you in advance for reading my question.

-Trent

Edit: I have attached my code. The area in question is the training() and findDiff() methods.

package testingV1;

//OpenCv + OpenNI + Java Libraries
import java.awt.FlowLayout;
import java.util.ArrayList;
import java.util.List;
import java.awt.image.BufferedImage;
import java.awt.image.DataBuffer;
import java.awt.image.DataBufferByte;
import java.io.*;
import java.nio.ByteBuffer;

import javax.imageio.ImageIO;
import javax.swing.*;

import org.opencv.core.*;
import org.opencv.imgproc.*;
import org.opencv.objdetect.CascadeClassifier;
import org.opencv.video.BackgroundSubtractorMOG;
import org.opencv.video.Video;
import org.opencv.highgui.*;
import org.opencv.*;
import org.OpenNI.*;

public class TestV1 {
    static int imWidth = 640, imHeight = 480;
    static ImageGenerator imageGen;
    static Context context;
    static int flag = CvType.CV_8UC3;
    static int flag2 = CvType.CV_8UC1;
    static Mat background;

    public static void main(String[] args) throws GeneralException{
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        //We create a new "context" of the Kinect
        context = new Context(); 
        JFrame canvas = new JFrame("Optical Flow");

        //need to create and add license to our "context"
        License license = new License("PrimeSense", "0KOIk2JeIBYClPWVnMoRKn5cdY4=");
        context.addLicense(license); 

        //defining the data we are taking from the kinect
        MapOutputMode mapMode = null; //initialize it to null
        mapMode = new MapOutputMode(imWidth, imHeight, 30); //create a 640x480 30fps feed definition

        imageGen = ImageGenerator.create(context); //Rgb camera
        imageGen.setMapOutputMode(mapMode); //change our feed to 640x480 30 fps
        imageGen.setPixelFormat(PixelFormat.RGB24);///Pixel format, RGB 8-bit 3 channel 


        context.setGlobalMirror(true); //Mirrors our feed to make it more intuitive

        BufferedImage rgbImage = new BufferedImage(imWidth, imHeight, BufferedImage.TYPE_INT_RGB);
        BufferedImage prevImg = new BufferedImage(imWidth, imHeight, BufferedImage.TYPE_BYTE_GRAY);
        BufferedImage currImg = new BufferedImage(imWidth, imHeight, BufferedImage.TYPE_BYTE_GRAY);
        BufferedImage diffImg = new BufferedImage(imWidth, imHeight, BufferedImage.TYPE_BYTE_GRAY);
        BufferedImage paintedImg = new BufferedImage(imWidth, imHeight, BufferedImage.TYPE_INT_RGB);
        BufferedImage facesImg = new BufferedImage(imWidth, imHeight, BufferedImage.TYPE_INT_RGB);
        Mat paintedMat = new Mat(imHeight, imWidth, flag);
        Mat facesMat = new Mat(imHeight, imWidth, flag);
        Mat currMat = new Mat(imHeight, imWidth, flag2);
        Mat prevMat = new Mat(imHeight, imWidth, flag2);
        Mat diffMat = new Mat(imHeight, imWidth, flag2);    
        Mat paintedMatg = new Mat(imHeight, imWidth, flag2);
        ByteBuffer imageBB;

        //First Frame
        canvas.getContentPane().setLayout(new FlowLayout());
        Icon video = new ImageIcon(rgbImage);
        JLabel panel = new JLabel(video);
        //Icon video2 = new ImageIcon(paintedImg);
        //JLabel panel2 = new JLabel(video2);
        //Icon video3 = new ImageIcon(facesImg);
        //JLabel panel3 = new JLabel(video3);
        Icon video4 = new ImageIcon(diffImg);
        JLabel panel4 = new JLabel(video4);
        canvas.getContentPane().add(panel);
        //canvas.getContentPane().add(panel2);
        //canvas.getContentPane().add(panel3);
        canvas.getContentPane().add(panel4);
        canvas.pack();
        canvas.setVisible(true);
        canvas.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

        CascadeClassifier faceDetectorAlg = new CascadeClassifier("C:/Users/Trent/Desktop/Capstone"
                + "/ComputerVisionCode/November16/testingV1/src/testingV1/haarcascade_frontalface_alt.xml");

        boolean firstTime = true;

        imageGen.startGenerating();
        while(true){
            context.waitOneUpdateAll(imageGen);
            imageBB = imageGen.getImageMap().createByteBuffer(); //get KinectData
            rgbImage = bufToImage(imageBB);  //take data from kinect and put in BufferedImage
            prevMat = currMat;
            currMat = img2Mat(rgb2Gray(rgbImage));

            if(firstTime){
                training(rgbImage);
                firstTime = false;
            }
            else{

                diffMat = findDiff(currMat);
                diffImg = mat2Img(diffMat);
            }

            //optical flow - inaccurate
            //paintedMatg = opticalFlow(img2Mat(prevImg), img2Mat(currImg), 300, 0.01, 10);
            //Imgproc.cvtColor(paintedMatg, paintedMat, Imgproc.COLOR_GRAY2RGB); //change from gray to color
            //paintedImg = mat2Img(paintedMat);

            //face detection - extremely resource intensive
            //facesMat = faceDetector(img2Mat(rgbImage), faceDetectorAlg);
            //facesImg = mat2Img(facesMat);

            panel.setIcon(new ImageIcon(rgbImage));
            //panel2.setIcon(new ImageIcon(paintedImg));
            //panel3.setIcon(new ImageIcon(facesImg));
            panel4.setIcon(new ImageIcon(diffImg));
            canvas.repaint();
            canvas.revalidate();
        }
    }

    //establishes a background for better diff images
    private static void training(BufferedImage in){
        background = img2Mat(rgb2Gray(in)); //grayscale Mat of the empty room

        System.out.println("Training Complete");
    }

    private static Mat findDiff(Mat in){
        Mat output = new Mat(imHeight, imWidth, flag2);

        Core.absdiff(background, in, output);
        Imgproc.threshold(output, output, 20, 255, Imgproc.THRESH_BINARY);

        return output;
    }

    //Face Detection
    private static Mat faceDetector(Mat in, CascadeClassifier Alg){
        Mat output = in;
        MatOfRect faceDetections = new MatOfRect();

        if(Alg.empty()){
            System.out.println("didnt load");
            return output;
        }

        Alg.detectMultiScale(in, faceDetections);

        for(Rect rect : faceDetections.toArray()){
            Core.rectangle(output,  new Point(rect.x, rect.y), 
                    new Point(rect.x + rect.width, rect.y + rect.height), new Scalar(0, 255, 0), 2);
        }

        return output;
    }

    //Returns an image with vectors painted to show movement.
    private static Mat opticalFlow(Mat curr, Mat prev, int maxDetectionCount, double qualityLevel, double minDistance){
        List<MatOfPoint2f> trackedPoints = new ArrayList<MatOfPoint2f>();
        MatOfPoint initial = new MatOfPoint();
        MatOfFloat err = new MatOfFloat();
        MatOfByte status = new MatOfByte();
        MatOfPoint2f initial2f = new MatOfPoint2f();
        MatOfPoint2f next2f = new MatOfPoint2f();

        double[] temp;
        Point p1 = new Point();
        Point p2 = new Point();
        Mat output = new Mat(imHeight, imWidth, flag);

        Scalar red = new Scalar(255, 0, 0);

        //Finds Tracking points
        if(trackedPoints.size() < 1){
            Imgproc.goodFeaturesToTrack(curr, initial, maxDetectionCount, qualityLevel, minDistance);
            initial.convertTo(initial2f, CvType.CV_32FC2);
            trackedPoints.add(initial2f);
        }

        //catches first time frame
        if(prev.empty())
            curr.copyTo(prev);

        //find points in current image
        if(trackedPoints.get(0).total() > 0){
            Video.calcOpticalFlowPyrLK(prev, curr, trackedPoints.get(0), next2f, status, err);
            trackedPoints.add(next2f);
        }

        output = curr;

        //draw red lines
        for(int i = 0; i < trackedPoints.get(0).cols(); i++){
            for(int j = 0; j < trackedPoints.get(0).rows(); j++){
                temp  = trackedPoints.get(0).get(j, i);
                p1.set(temp);
                temp = trackedPoints.get(1).get(j, i);
                p2.set(temp);
                Core.line(output, p1, p2, red);
            }
        }

        return output;
    }

    //Returns a vector indicating the average magnitude and direction of movement.
    private static double[] opticalFlowAnalysis(Mat curr, Mat prev, int maxDetectionCount, double qualityLevel, double minDistance){
        List<MatOfPoint2f> trackedPoints = new ArrayList<MatOfPoint2f>();
        MatOfPoint initial = new MatOfPoint();
        MatOfFloat err = new MatOfFloat();
        MatOfByte status = new MatOfByte();
        MatOfPoint2f initial2f = new MatOfPoint2f();
        MatOfPoint2f next2f = new MatOfPoint2f();

        double[] total = new double[2];
        total[0] = 0;
        total[1] = 0;
        double[] point1;
        double[] point2;

        double[] output = new double[2];

        //Finds Tracking points
        if(trackedPoints.size() < 1){
            Imgproc.goodFeaturesToTrack(curr, initial, maxDetectionCount, qualityLevel, minDistance);
            initial.convertTo(initial2f, CvType.CV_32FC2);
            trackedPoints.add(initial2f);
        }

        //catches first time frame
        if(prev.empty())
            curr.copyTo(prev);

        //find points in current image
        if(trackedPoints.get(0).total() > 0){
            Video.calcOpticalFlowPyrLK(prev, curr, trackedPoints.get(0), next2f, status, err);
            trackedPoints.add(next2f);
        }

        //average the distance moved
        // (-) signifies distance moved right and down
        // (+) signifies distance moved left and up
        for(int i = 0; i < trackedPoints.get(0).cols(); i++){
            for(int j = 0; j < trackedPoints.get(0).rows(); j++){
                point1 = trackedPoints.get(0).get(j, i);
                point2 = trackedPoints.get(1).get(j, i);
                total[0] += point1[0] - point2[0];
                total[1] += point1[1] - point2[1];
            }
        }

        output[0] = total[0] / trackedPoints.get(0).total(); //average over all tracked points
        output[1] = total[1] / trackedPoints.get(0).total();
        return output;
    }

    private static Mat img2Mat(BufferedImage in){
        Mat out;
        byte[] data;
        int r, g, b;

        if(in.getType() == BufferedImage.TYPE_INT_RGB){
            out = new Mat(imHeight, imWidth, flag);
            data = new byte[imWidth * imHeight * (int)out.elemSize()];
            int[] dataBuff = in.getRGB(0, 0, imWidth, imHeight, null, 0, imWidth);
            for(int i = 0; i < dataBuff.length; i++){
                data[i*3] = (byte) ((dataBuff[i] >> 16) & 0xFF);
                data[i*3 + 1] = (byte) ((dataBuff[i] >> 8) & 0xFF);
                data[i*3 + 2] = (byte) ((dataBuff[i] >> 0) & 0xFF);
            }
        }
        else{
            out = new Mat(imHeight, imWidth, flag2);
            data = new byte[imWidth * imHeight * (int)out.elemSize()];
            int[] dataBuff = in.getRGB(0, 0, imWidth, imHeight, null, 0, imWidth);
            for(int i = 0; i < dataBuff.length; i++){
                r = (byte) ((dataBuff[i] >> 16) & 0xFF);
                g = (byte) ((dataBuff[i] >> 8) & 0xFF);
                b = (byte) ((dataBuff[i] >> 0) & 0xFF);
                data[i] = (byte)((0.21 * r) + (0.72 * g) + (0.07 * b)); //luminosity
            }
        }

        out.put(0, 0, data);
        return out;
    }

    private static BufferedImage mat2Img(Mat in){
        BufferedImage out;
        byte[] data = new byte[imWidth * imHeight * (int)in.elemSize()];
        int type;
        in.get(0, 0, data);

        if(in.channels() == 1)
            type = BufferedImage.TYPE_BYTE_GRAY;
        else
            type = BufferedImage.TYPE_3BYTE_BGR;

        out = new BufferedImage(imWidth, imHeight, type);
        out.getRaster().setDataElements(0, 0, imWidth, imHeight, data);
        return out;
    }

    private static BufferedImage rgb2Gray(BufferedImage in){
        BufferedImage out = new BufferedImage(imWidth, imHeight, BufferedImage.TYPE_BYTE_GRAY);
        Mat color = new Mat(imHeight, imWidth, flag);
        Mat gray = new Mat(imHeight, imWidth, flag2); //single-channel grayscale output

        color = img2Mat(in); //converting bufferedImage to Mat
        Imgproc.cvtColor(color, gray, Imgproc.COLOR_RGB2GRAY); //change from color to grayscale
        out = mat2Img(gray); //converting Mat to bufferedImage

        return out;
    }

    //Converts bytebuffer to buffered image
    private static BufferedImage bufToImage(ByteBuffer pixelsRGB){
        int[] pixelInts = new int[imWidth * imHeight];
        int rowStart = 0;
        int bbIdx; //index to ByteBuffer
        int i = 0; //index to pixels
        int rowLen = imWidth * 3;

        for (int row = 0; row < imHeight; row++){
            bbIdx = rowStart;
            for(int col = 0; col < imWidth; col++){
                int pixR = pixelsRGB.get(bbIdx++);
                int pixG = pixelsRGB.get(bbIdx++);
                int pixB = pixelsRGB.get(bbIdx++);
                pixelInts[i++] = 0xFF000000 | ((pixR & 0xFF) << 16) | ((pixG & 0xFF) << 8) | (pixB & 0xFF);
            }
            rowStart += rowLen; //Move to next row
        }
        BufferedImage im = new BufferedImage(imWidth, imHeight, BufferedImage.TYPE_INT_RGB);
        im.setRGB(0, 0, imWidth, imHeight, pixelInts, 0, imWidth);
        return im;
    }

}

Answers (1)

Shofwan

This answer is a bit late, but it may help for future reference.

I think that to learn about object detection you can look here and at his code here (I learned from it for my own project). I then built my project like this, based on his object detection. Or you can look at a simple background subtraction here.
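
For the background subtraction route, OpenCV 2.4's Java bindings already include BackgroundSubtractorMOG (the question's code even imports it); it learns the background over many frames instead of relying on a single stored training image. A minimal sketch of how it could be used, where frame stands for each incoming colour Mat and 0.01 is just an illustrative learning rate:

//Sketch: MOG mixture-of-Gaussians background subtraction (OpenCV 2.4 Java bindings).
BackgroundSubtractorMOG mog = new BackgroundSubtractorMOG();
Mat fgMask = new Mat();

//call once per frame inside the capture loop;
//"frame" is the current colour Mat, 0.01 an illustrative learning rate
mog.apply(frame, fgMask, 0.01);
//fgMask is now a binary (0/255) foreground mask: the white blob for the moving person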
