LucaLumetti

Reputation: 340

face detection with OpenCV and nodejs

I'm trying to do face detection with Node.js and OpenCV.

var cv = require('opencv');

// camera properties
var camWidth = 320;
var camHeight = 240;
var camFps = 10;
var camInterval = 1000 / camFps;

// face detection properties
var rectColor = [0, 255, 0];
var rectThickness = 1;

// initialize camera
var camera = new cv.VideoCapture(0);
camera.setWidth(camWidth);
camera.setHeight(camHeight);

module.exports = function (socket) {
  setInterval(function () {
    camera.read(function (err, im) {
      if (err) throw err;
      im.detectObject('/usr/lib/node_modules/opencv/data/lbpcascades/lbpcascade_frontalface.xml', {}, function (err, faces) {
        if (err) throw err;

        for (var i = 0; i < faces.length; i++) {
          var face = faces[i];
          im.rectangle([face.x, face.y], [face.width, face.height], rectColor, rectThickness);
        }
        socket.emit('frame', { buffer: im.toBuffer() });
      });
    });
  }, camInterval);
  }, camInterval);
};

im.detectObject takes 80/120 seconds to execute, and over time this builds up a big delay between what the camera actually sees and what I see on the PC with the rectangle around my face. How can I improve this and get rid of the lag?
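Part of the growing backlog comes from setInterval itself: it keeps queueing camera reads every camInterval milliseconds regardless of whether the previous detection has finished, so slow frames pile up instead of simply lowering the frame rate. A minimal sketch of the alternative pattern — scheduling the next read only after the current frame is fully processed — is below. startLoop and processFrame are made-up names for illustration, not part of node-opencv; processFrame stands in for the camera.read + im.detectObject + socket.emit chain.

```javascript
// Sketch: self-rescheduling loop instead of setInterval.
// processFrame(done) must call done() when the frame is fully handled.
function startLoop(processFrame, targetInterval) {
  var stopped = false;

  function tick() {
    if (stopped) return;
    var start = Date.now();
    processFrame(function () {
      // Sleep only for whatever is left of the interval (never negative),
      // so a slow detection lowers the fps instead of creating a backlog.
      var elapsed = Date.now() - start;
      setTimeout(tick, Math.max(0, targetInterval - elapsed));
    });
  }

  tick();
  return function stop() { stopped = true; };
}
```

In the question's code, the setInterval(...) call inside module.exports would be replaced by startLoop(function (done) { camera.read(/* ... detect, emit, then done() ... */); }, camInterval).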

Upvotes: 0

Views: 1840

Answers (2)

Rupan RC

Reputation: 11

Try this:

var fs = require('fs');

im.detectObject(cv.FACE_CASCADE, {}, function (err, faces) {
  if (err) throw err;

  for (var i = 0; i < faces.length; i++) {
    var face = faces[i];
    im.rectangle([face.x, face.y], [face.width, face.height], rectColor, rectThickness);
  }

  im.save('image.jpg');
  console.log('image saved');
  fs.readFile('image.jpg', function (err, buffer) {
    if (err) throw err;
    socket.emit('image', { buffer: buffer, faces: faces.length });
  });
});
im.toBuffer is the reason for the lag. Instead, I saved the frame to a file, read it back, and emitted that buffer. I've also added the number of detected faces to the emit.

Upvotes: 1

nessuno

Reputation: 27052

Once you get the first match you have a set of ROIs. At that point you can stop running the detection algorithm and switch to a tracking algorithm (with motion estimation it will work even better).

If you don't want or need a full tracking algorithm, you can fall back on template matching, using the detected faces as templates and the current frame as the destination image.

I did the same in a C++ project. Here's the code I used to "track" the detected faces (stored in _camFaces, which plays the same role as your `faces` array).

The code below runs after a detection has been triggered and _camFaces has been filled with a set of pairs. Each pair consists of:

  1. a rectangle containing the size and position of the ROI in the previous frame;
  2. the ROI itself, in grayscale, used as the template for the template matching algorithm.

cv::Mat1b grayFrame = Preprocessor::gray(frame);
bool noneTracked = true;

for (auto& pair : _camFaces) {
  cv::Mat1f dst;
  cv::matchTemplate(grayFrame, pair.second, dst, CV_TM_SQDIFF_NORMED);

  double minval, maxval;
  cv::Point minloc, maxloc;
  cv::minMaxLoc(dst, &minval, &maxval, &minloc, &maxloc);

  if (minval <= 0.2) {
    // good match: move the ROI to the best-match position in this frame
    pair.first.x = minloc.x;
    pair.first.y = minloc.y;
    noneTracked = false;
  } else {
    // match too weak: mark this ROI as lost
    pair.first.x = pair.first.y = pair.first.width = pair.first.height = 0;
  }
}
// draw rectangles
cv::Mat frame2;
frame.copyTo(frame2);

for (const auto& pair : _camFaces) {
  cv::rectangle(frame2, pair.first, cv::Scalar(255, 255, 0), 2);
}
_updateCamView(frame2);

Upvotes: 1
