never_ever

Reputation: 185

High RMS error with "online" cv::stereoCalibrate

I have two cameras set up horizontally (close to each other): the left camera cam1 and the right camera cam2.

First I calibrate the cameras (I want to use 50 pairs of images):

  1. I calibrate both cameras separately using cv::calibrateCamera()
  2. I calibrate stereo using cv::stereoCalibrate()

My questions:

  1. In stereoCalibrate - I assume the order of the camera data is important. Should the left camera's data be imagePoints1 and the right camera's imagePoints2, or vice versa, or doesn't it matter as long as the order of the cameras is the same at every point of the program?
  2. In stereoCalibrate - I get an RMS error of around 15.9319 and an average reprojection error of around 8.4536 when I use all images from the cameras. In the other case, where I first save the images and select only the pairs in which the whole chessboard is visible (all of the chessboard's squares are in the camera view and every square is visible in its entirety), I get an RMS of around 0.7. Does that mean only offline calibration is good, and that I should select good images manually if I want to calibrate the cameras? Or is there some way to do the calibration online? By online I mean that I start capturing frames from the camera, find the chessboard corners in every frame, and calibrate the camera after I stop the capture.
  3. I need only four distortion coefficients, but I get five of them (with k3). With the old API's cvStereoCalibrate2 I got only four values, but with cv::stereoCalibrate I don't know how to do this. Is it even possible, or is the only way to compute all five values and use only four of them later?

My code:

// Per-camera intrinsics: camera matrices and distortion coefficients
Mat cameraMatrix[2], distCoeffs[2];
distCoeffs[0] = Mat(4, 1, CV_64F);
distCoeffs[1] = Mat(4, 1, CV_64F);

// Per-view rotation and translation vectors returned by calibrateCamera
vector<Mat> rvec1, rvec2, tvec1, tvec2;

// Calibrate each camera separately; CALIB_FIX_K3 keeps k3 fixed at zero
double rms1 = cv::calibrateCamera(objectPoints, imagePoints[0], imageSize,
                                  cameraMatrix[0], distCoeffs[0], rvec1, tvec1,
                                  CALIB_FIX_K3,
                                  TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 30, DBL_EPSILON));

double rms2 = cv::calibrateCamera(objectPoints, imagePoints[1], imageSize,
                                  cameraMatrix[1], distCoeffs[1], rvec2, tvec2,
                                  CALIB_FIX_K3,
                                  TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 30, DBL_EPSILON));

qDebug() << "Rms1: " << rms1;
qDebug() << "Rms2: " << rms2;

// Stereo extrinsics: rotation R and translation T between the cameras,
// plus the essential matrix E and fundamental matrix F
Mat R, T, E, F;

double rms = cv::stereoCalibrate(objectPoints, imagePoints[0], imagePoints[1],
   cameraMatrix[0], distCoeffs[0],
   cameraMatrix[1], distCoeffs[1],
   imageSize, R, T, E, F,
   TermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 100, 1e-5),
   CV_CALIB_FIX_INTRINSIC +       // keep the intrinsics from the mono calibrations
   CV_CALIB_SAME_FOCAL_LENGTH);

Upvotes: 0

Views: 641

Answers (2)

Dan

Reputation: 77

  1. The only reason the order of the cameras/image sets matters is the rotation and translation you get from the stereoCalibrate function. The image set you pass into the function first is taken as the base, so the rotation and translation you get describe how the second camera is translated and rotated relative to the first. Of course you can simply invert the result, which is the same as switching the image sets (see the first sketch after this list). This only holds if the images in both sets correspond to each other (i.e. their order matches).

  2. This is a bit tricky, but there are a few reasons why you are getting such a big RMS error.

    • First, I'm not sure how you detect your chessboard corners, but if the whole chessboard is not visible and you provide a valid chessboard model, findChessboardCorners should return false because it does not detect the chessboard. So you are able to automatically (= online) discard these "chessless" images. Of course you also have to throw away the corresponding image from the second camera, even if that one is valid, to keep the order of both sets consistent.
    • The second option is to back-project all corners for each image and compute the reprojection error for every image separately (not only for the whole calibration). You can then keep, for example, the best 3/4 of the images ranked by this error and recalculate the calibration without the outliers (see the per-view error sketch after this list).
    • Another reason could be the time synchronization between snapping the images from the two cameras. If the delay is big and you move the chessboard continuously, you are actually trying to match projections of a slightly translated chessboard.

    If you want a robust online version, I'm afraid you will end up with the second option, as it also helps you get rid of blurred images, wrong detections due to lighting conditions, and so on. You just need to set the threshold (how many images you cut off as outliers) carefully so as not to throw away valid data.

  3. I'm not that sure in this field, but I would say you can compute five of them and use only four, because that just looks like cutting off a higher-order term of the Taylor series. But I cannot guarantee it's true. (A small sketch of the truncation follows.)
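
To make point 1 concrete: if stereoCalibrate returns R and T such that a point X1 in the first camera's coordinates maps to X2 = R*X1 + T in the second camera's, then swapping the two image sets yields the inverse transform. A minimal C++ sketch (the helper function and its name are mine, just for illustration):

#include <opencv2/core.hpp>

// Invert the extrinsics returned by cv::stereoCalibrate.
// If X2 = R * X1 + T, then X1 = R^T * X2 - R^T * T, which is exactly what
// stereoCalibrate would return with the two image sets swapped.
void invertExtrinsics(const cv::Mat& R, const cv::Mat& T,
                      cv::Mat& Rinv, cv::Mat& Tinv)
{
    Rinv = R.t();       // rotation matrices are orthonormal: inverse == transpose
    Tinv = -Rinv * T;   // translation expressed in the other camera's frame
}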
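
For the per-view reprojection check, the rvec/tvec vectors that cv::calibrateCamera already returns are enough. A sketch, assuming the point types match the question's code (the helper function itself is not from the answer, only an illustration):

#include <opencv2/calib3d.hpp>
#include <cmath>
#include <vector>

// RMS reprojection error of each view separately, so blurred frames or
// bad corner detections can be dropped before a final re-calibration.
std::vector<double> perViewErrors(
    const std::vector<std::vector<cv::Point3f> >& objectPoints,
    const std::vector<std::vector<cv::Point2f> >& imagePoints,
    const std::vector<cv::Mat>& rvecs, const std::vector<cv::Mat>& tvecs,
    const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs)
{
    std::vector<double> errors(objectPoints.size());
    for (size_t i = 0; i < objectPoints.size(); ++i) {
        std::vector<cv::Point2f> projected;
        cv::projectPoints(objectPoints[i], rvecs[i], tvecs[i],
                          cameraMatrix, distCoeffs, projected);
        double err = cv::norm(imagePoints[i], projected, cv::NORM_L2);
        errors[i] = std::sqrt(err * err / projected.size()); // per-view RMS in pixels
    }
    return errors;
}

Sorting these values and keeping, say, the best three quarters of the views (dropping the same views from both cameras to keep the sets aligned) implements the outlier cut described above.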
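
And for point 3: cv::stereoCalibrate accepts the same CALIB_FIX_K3 flag as cv::calibrateCamera, which keeps k3 pinned at zero, so you can safely truncate the coefficient vector afterwards. A small sketch, assuming the usual 5x1 column layout (use colRange instead for a 1x5 row):

#include <opencv2/core.hpp>

// distCoeffs holds (k1, k2, p1, p2, k3). With CALIB_FIX_K3 the k3 entry
// stays at zero, so the first four rows are the complete model.
cv::Mat firstFourCoeffs(const cv::Mat& distCoeffs)
{
    return distCoeffs.rowRange(0, 4).clone();
}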

Upvotes: 0

I had a similar problem. Mine was that I was reading the left images and the right images while assuming that both file lists were already sorted. Here is a part of the Python code; I fixed it by wrapping the file list in sorted() in the for loop (the right-camera loop gets the same fix).

import glob
import cv2

# objp, criteria, (n, m) and path_left are defined earlier in the script,
# as in the standard OpenCV chessboard-calibration setup
objpoints1 = []  # 3D chessboard model points, one entry per accepted image
imgpoints1 = []  # refined 2D corner detections
i = 0

images = glob.glob(path_left)
for fname in sorted(images):  # sorted() keeps the left/right file order in sync
    img = cv2.imread(fname)
    gray1 = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the chess board corners
    ret, corners1 = cv2.findChessboardCorners(gray1, (n, m), None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        i = i + 1
        print("Cam1. Chess pattern was detected")
        objpoints1.append(objp)
        cv2.cornerSubPix(gray1, corners1, (5, 5), (-1, -1), criteria)
        imgpoints1.append(corners1)
        cv2.drawChessboardCorners(img, (n, m), corners1, ret)
        cv2.imshow('img', img)
        cv2.waitKey(100)

Upvotes: 0
