Reputation: 693
After checking several pieces of code, I took several shots, found the chessboard corners and used them to get the camera matrix, distortion coefficients, and rotation and translation vectors. Now, can someone tell me which Python OpenCV function I need to calculate the distance in the real world from the 2D image? projectPoints? For example, using the chessboard as a reference (see picture), if the tile size is 5 cm, the distance for 4 tiles should be 20 cm. I have seen functions like projectPoints, findHomography and solvePnP, but I am not sure which one I need to solve my problem and get the transformation matrix between the camera world and the chessboard world.

My setup: a single camera, in the same position in all cases but not exactly above the chessboard, with the chessboard placed on a planar object (a table).
# NOTE: nx, ny, calib_images_dir and verbose are assumed to be defined earlier
# (number of inner corners per row/column, calibration image folder, debug flag).
import glob
from os import path

import cv2
import numpy as np

# Prepare object points, like (0,0,0), (1,0,0), (2,0,0) ...., (6,5,0)
objp = np.zeros((nx * ny, 3), np.float32)
objp[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2)

# Arrays to store object points and image points from all the images.
objpoints = []  # 3d points in real world space
imgpoints = []  # 2d points in image plane.

# Make a list of calibration images
images = glob.glob(path.join(calib_images_dir, 'calibration*.jpg'))
print(images)

# Step through the list and search for chessboard corners
for filename in images:
    img = cv2.imread(filename)
    imgScale = 0.5
    newX, newY = img.shape[1] * imgScale, img.shape[0] * imgScale
    res = cv2.resize(img, (int(newX), int(newY)))
    gray = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)

    # Find the chessboard corners
    pattern_found, corners = cv2.findChessboardCorners(gray, (nx, ny), None)

    # If found, add object points and image points (after refining them)
    if pattern_found:
        objpoints.append(objp)

        # Increase accuracy using subpixel corner refinement
        corners = cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1),
                                   (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.1))
        imgpoints.append(corners)

        if verbose:
            # Draw and display the corners
            draw = cv2.drawChessboardCorners(res, (nx, ny), corners, pattern_found)
            cv2.imshow('img', draw)
            cv2.waitKey(500)

if verbose:
    cv2.destroyAllWindows()

# Now that we have our object points and image points, we are ready for calibration.
# Get the camera matrix, distortion coefficients, rotation and translation vectors
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
print(mtx)
print(dist)
print('rvecs:', type(rvecs), ' ', len(rvecs), ' ', rvecs)
print('tvecs:', type(tvecs), ' ', len(tvecs), ' ', tvecs)

# Reprojection error: project the object points back and compare with the detected corners
mean_error = 0
for i in range(len(objpoints)):
    imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
    error = cv2.norm(imgpoints[i], imgpoints2, cv2.NORM_L2) / len(imgpoints2)
    mean_error += error
print("total error: ", mean_error / len(objpoints))

imagePoints, jacobian = cv2.projectPoints(objpoints[0], rvecs[0], tvecs[0], mtx, dist)
print('Image points: ', imagePoints)
Upvotes: 13
Views: 7266
Reputation: 1638
Your problem relates mainly to camera calibration, and especially to the limited way camera distortion is resolved in OpenCV. You need to approximate the distortion function of your camera lens by taking a few distance probes at different coordinates of your chessboard. A good approach is to start with a small distance at the centre of the lens, then take a slightly longer second distance one square further out, and repeat the operation out to the border. This gives you the coefficients of your distortion function.
Matlab has its own library that solves your problem with high accuracy; unfortunately, it is quite expensive.
Regarding:
Now, can someone tell me which Python OpenCV function I need to calculate the distance in the real world from the 2D image?
I think this article gives a good explanation of the set of Python OpenCV functions used to obtain real-world measurements. By resolving the coefficients as described above you can get good accuracy. In any case, I don't think there is an open-source implementation of a function like
cv2.GetRealDistance(...)
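That said, the distortion coefficients that cv2.calibrateCamera already returns can be applied directly. Here is a minimal sketch, assuming the mtx and dist variables from the question's code and a hypothetical image file test.jpg taken at the same (resized) resolution as the calibration images:

import cv2

# Sketch only: undistort an image with the coefficients obtained above.
# 'test.jpg' is a hypothetical file name; mtx and dist come from cv2.calibrateCamera.
img = cv2.imread('test.jpg')
h, w = img.shape[:2]

# Refine the camera matrix for this image size (alpha=1 keeps all source pixels)
new_mtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))

# Remove lens distortion so that straight lines in the scene stay straight
undistorted = cv2.undistort(img, mtx, dist, None, new_mtx)
cv2.imwrite('test_undistorted.jpg', undistorted)

Keep in mind that the question's code resizes the calibration images by 0.5, so mtx and dist are only valid at that reduced resolution.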
Upvotes: 0
Reputation: 161
You are indeed right, and I think you should use solvePnP for this problem. (Read more on perspective-n-point problems here: https://en.wikipedia.org/wiki/Perspective-n-Point.)
The Python OpenCV solvePnP function takes the following parameters and returns an output rotation vector and an output translation vector which convert the model coordinate system to the camera coordinate system.
cv2.solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs[, rvec[, tvec[, useExtrinsicGuess[, flags]]]]) → retval, rvec, tvec
In your case the imagePoints will be the refined corners of the chessboard, and the objectPoints the corresponding objp array (a single array, not the objpoints list), so it would look something like:
ret, rvec, tvec = cv2.solvePnP(objp, corners, mtx, dist)
With the returned translation vector you can calculate the distance from the camera to the chessboard. The output translation from solvePnP is in the same units as specified in objectPoints, so if you build objp in multiples of the 5 cm square size the translation comes out in centimetres.
Finally, you can compute the real distance from tvec as the Euclidean norm:
d = math.sqrt(tx*tx + ty*ty + tz*tz)
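Putting it together, here is a minimal sketch that reuses objp, corners, mtx and dist from the question's code; square_size is an assumed name holding the 5 cm tile size mentioned in the question:

import math

import cv2

# Sketch only: camera-to-chessboard distance from the calibration results.
# objp, corners, mtx and dist are taken from the question's code;
# square_size is an assumption matching the 5 cm tiles the OP describes.
square_size = 5.0  # centimetres per chessboard square

# Scale the unit-square object points so that tvec comes out in centimetres
objp_cm = objp * square_size

ret, rvec, tvec = cv2.solvePnP(objp_cm, corners, mtx, dist)

# tvec is the position of the chessboard origin in the camera coordinate system
tx, ty, tz = tvec.ravel()
d = math.sqrt(tx * tx + ty * ty + tz * tz)  # Euclidean distance to the board origin
print('Distance from camera to chessboard origin: %.1f cm' % d)

Since your camera stays fixed, the rvec and tvec from a single view of the board give you the transformation between the chessboard world and the camera world that you were asking about.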
Upvotes: 10