m.ariuz

Reputation: 31

How to do camera calibration using a ChArUco board with OpenCV 4.10.0?

I am trying to do a camera calibration using OpenCV, version 4.10.0. I already got a working version for the usual checkerboard, but I can't figure out how it works with charuco. I would be grateful for any working code example.

What I tried: I tried following this tutorial: https://medium.com/@ed.twomey1/using-charuco-boards-in-opencv-237d8bc9e40d

It seems that essential functions like cv.aruco.interpolateCornersCharuco and cv.aruco.calibrateCameraCharuco are missing, even though the documentation states an existing Python implementation, see: https://docs.opencv.org/4.10.0/d9/d6a/group__aruco.html#gadcc5dc30c9ad33dcf839e84e8638dcd1

I also tried following the official documentation for C++, see https://docs.opencv.org/4.10.0/da/d13/tutorial_aruco_calibration.html

The ArucoDetector in Python doesn't have the detectBoard method, so it's also impossible to follow this tutorial completely.

From a hint in the documentation I guess that the functions used in the Medium tutorial are deprecated? But nowhere are they marked as “removed”!

I already got the markers detected:

Detected image.

But then getting the object and image point fails:

`object_points_t, image_points_t = charuco_board.matchImagePoints( marker_corners, marker_ids)`

Any help or working code would be highly appreciated.

P.S.: My output of the “detectMarkers” method seems valid. The detected corners are of type

std::vector<std::vector<Point2f>>,

so translated to Python: an array of arrays, each containing the 4 corner points with 2 coordinates each. The IDs are of type

std::vector<int>,

so in Python a list of integers.

So I guess the Python function “matchImagePoints” gets what it wants!

The marker detection seems successful. I already tried changing the corner array: the detectMarkers method returns a tuple, so I used the following code to create the desired array of shape (X, 4, 2), X being the number of detected markers, each with 4 corners of 2 coordinates (x and y).

marker_corners = np.array(marker_corners)
marker_corners = np.squeeze(marker_corners)

So I have the following:

marker_corners = [
[[8812. 5445.]
[8830. 5923.]
[8344. 5932.]
[8324. 5452.]],

[[7172. 5469.]
[7184. 5947.]
[6695. 5949.]
[6687. 5476.]],

[[3896. 5481.]
[3885. 5952.]
[3396. 5951.]
[3406. 5483.]],
...
]

marker_ids = [
[11],
[27],
[19],
...
]

Both passing the original return value of

detector.detectMarkers

into the function and passing my modified array fail. (Passing the unsqueezed (X, 1, 4, 2) array fails as well!)

I can't get any further.

Minimum working example:

Use this picture: Charuco board 11x8

import cv2 as cv
import numpy as np
    
image = cv.imread("charuco_board.png")
im_gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
charuco_marker_dictionary = cv.aruco.getPredefinedDictionary(cv.aruco.DICT_6X6_250)
charuco_board = cv.aruco.CharucoBoard(
    size=(11, 8),
    squareLength=500,
    markerLength=300,
    dictionary=charuco_marker_dictionary
)

# Initial method of this question:
params = cv.aruco.DetectorParameters()
detector = cv.aruco.ArucoDetector(charuco_marker_dictionary, params)
marker_corners, marker_ids, rejected_candidates = detector.detectMarkers(im_gray)
marker_corners = np.array(marker_corners)
marker_corners = np.squeeze(marker_corners)

# Using cv.aruco.CharucoDetector as pointed out in the comments.
detector_charuco = cv.aruco.CharucoDetector(charuco_board)
result = detector_charuco.detectBoard(im_gray)
marker_corners_charuco, marker_ids_charuco = result[2:]

# Compare the two results
assert (marker_ids == marker_ids_charuco).all()  # Detected ID's are identical.
# assert (marker_corners == marker_corners_charuco).all()  # There seems to be a difference.
print(marker_corners[0:2], marker_corners_charuco[0:2])  # They seem to be in a different order.

# Proof of the different order statement:
def reshape_and_sort(array):
    array_reshaped = array.copy().reshape(-1, 2)
    return np.array(sorted(array_reshaped, key=lambda x: x[0]**2 + x[1]**2))  # Sort by squared distance to the point (0, 0); the square root is left out.

marker_corners_reshaped = reshape_and_sort(marker_corners)
marker_corners_reshaped_charuco = reshape_and_sort(np.array(marker_corners_charuco))
assert (marker_corners_reshaped == marker_corners_reshaped_charuco).all()


# Trying with the new results:  # Still fails!
try:
    object_points_t, image_points_t = charuco_board.matchImagePoints(
        marker_corners_charuco,
        marker_ids_charuco
    )
except cv.error as err:
    print(err)

Upvotes: 1

Views: 576

Answers (2)

Huijo Kim

Reputation: 11

  1. A standard checkerboard does not distinguish between rows and columns: (3, 5) == (5, 3).
  2. A standard checkerboard pattern size counts the joints (inner corners) between squares.
  3. A ChArUco board size counts the squares per row and column, and the order matters.

So a (5, 7) standard checkerboard pattern is equivalent to a (6, 8) ChArUco board under OpenCV's conventions.
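A small sketch to make the counting concrete (my own illustration, assuming the 4.x cv.aruco.CharucoBoard constructor and its getChessboardCorners() method):

import cv2 as cv

dictionary = cv.aruco.getPredefinedDictionary(cv.aruco.DICT_6X6_250)

# Standard checkerboard: the pattern size passed to cv.findChessboardCorners
# counts *inner corners* (the joints between squares), and (5, 7) == (7, 5).
checkerboard_pattern_size = (5, 7)

# ChArUco board: the size counts *squares* per row/column, and order matters.
charuco_board = cv.aruco.CharucoBoard((6, 8), 0.04, 0.02, dictionary)

# A (6, 8)-square board has (6-1) x (8-1) = 5 x 7 inner corners, i.e. the
# same corner grid that the (5, 7) checkerboard pattern size describes.
n_inner_corners = len(charuco_board.getChessboardCorners().reshape(-1, 3))
print(n_inner_corners)  # expected: 35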

OpenCV has massively updated the (Ch)ArUco board part since 4.6, so existing tutorials only work partially (some legacy code may still function, but not completely). Some features will still be updated, so it's recommended to use the latest version. Here's the relevant discussion.

I managed to run ChArUco with the recent version here.

Upvotes: 1

m.ariuz

Reputation: 31

I managed to get the calibration done with the official non-contrib opencv code. Here is a minimal working example:

The problem: be careful when defining your detection board. An 11x8 ChArUco board has a different ordering of the ArUco markers than an 8x11 one, even though they look very similar when printed. Detection against the wrong orientation will fail.
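To illustrate that pitfall, here is a small side sketch (my own, assuming the 4.x CharucoBoard constructor; the full example follows below). The two orientations are not just rotated copies of each other:

import cv2 as cv
import numpy as np

d = cv.aruco.getPredefinedDictionary(cv.aruco.DICT_6X6_250)
board_11x8 = cv.aruco.CharucoBoard((11, 8), 500, 300, d)
board_8x11 = cv.aruco.CharucoBoard((8, 11), 500, 300, d)
img_11x8 = board_11x8.generateImage((11 * 500, 8 * 500))
img_8x11 = board_8x11.generateImage((8 * 500, 11 * 500))

# Rotating one board by 90 degrees does not reproduce the other: the marker
# IDs are laid out differently, so detecting against the wrong board fails.
print(np.array_equal(np.rot90(img_11x8), img_8x11))  # expected: False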

from typing import NamedTuple
import math

import matplotlib.pyplot as plt
import cv2 as cv
import numpy as np

class BoardDetectionResults(NamedTuple):
    charuco_corners: np.ndarray
    charuco_ids: np.ndarray
    aruco_corners: np.ndarray
    aruco_ids: np.ndarray


class PointReferences(NamedTuple):
    object_points: np.ndarray
    image_points: np.ndarray


class CameraCalibrationResults(NamedTuple):
    repError: float
    camMatrix: np.ndarray
    distcoeff: np.ndarray
    rvecs: np.ndarray
    tvecs: np.ndarray


SQUARE_LENGTH = 500
MARKER_LENGTH = 300
NUMBER_OF_SQUARES_VERTICALLY = 11
NUMBER_OF_SQUARES_HORIZONTALLY = 8

charuco_marker_dictionary = cv.aruco.getPredefinedDictionary(cv.aruco.DICT_6X6_250)
charuco_board = cv.aruco.CharucoBoard(
    size=(NUMBER_OF_SQUARES_HORIZONTALLY, NUMBER_OF_SQUARES_VERTICALLY),
    squareLength=SQUARE_LENGTH,
    markerLength=MARKER_LENGTH,
    dictionary=charuco_marker_dictionary
)

image_name = f'ChArUco_Marker_{NUMBER_OF_SQUARES_HORIZONTALLY}x{NUMBER_OF_SQUARES_VERTICALLY}.png'
charuco_board_image = charuco_board.generateImage(
        [i*SQUARE_LENGTH
         for i in (NUMBER_OF_SQUARES_HORIZONTALLY, NUMBER_OF_SQUARES_VERTICALLY)]
)
cv.imwrite(image_name, charuco_board_image)


def plot_results(image_of_board, original_board, detection_results, point_references):
    fig, axes = plt.subplots(2, 2)
    axes = axes.flatten()
    img_rgb = cv.cvtColor(image_of_board, cv.COLOR_BGR2RGB)  # use the passed-in image instead of a global
    axes[0].imshow(img_rgb)
    axes[0].axis("off")

    axes[1].imshow(img_rgb)
    axes[1].axis("off")
    axes[1].scatter(
        np.array(detection_results.aruco_corners).squeeze().reshape(-1, 2)[:, 0],
        np.array(detection_results.aruco_corners).squeeze().reshape(-1, 2)[:, 1],
        s=5,
        c="green",
        marker="x",
    )
    axes[2].imshow(img_rgb)
    axes[2].axis("off")

    axes[2].scatter(
        detection_results.charuco_corners.squeeze()[:, 0],
        detection_results.charuco_corners.squeeze()[:, 1],
        s=20,
        edgecolors="red",
        marker="o",
        facecolors="none"
    )
    axes[3].imshow(cv.cvtColor(original_board, cv.COLOR_BGR2RGB))
    axes[3].scatter(
        point_references.object_points.squeeze()[:, 0],
        point_references.object_points.squeeze()[:, 1]
    )
    fig.tight_layout()
    fig.savefig("test.png", dpi=900)
    plt.show()


def generate_test_images(image):
    """Warp the board image with a random homography.

    -> Just to test detection. This doesn't simulate a perspective
    projection of one single camera! (The intrinsics change randomly.)
    For a "camera simulation" one would need to define fixed intrinsics
    and random extrinsics, then combine them into a projection matrix
    and apply it to the image. -> You would also need to add a random z
    coordinate to the image points, since a projection maps from 3D
    space into 2D space.
    """
    h, w = image.shape[:2]
    src_points = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst_points = np.float32([
        [np.random.uniform(w * -0.2, w * 0.2), np.random.uniform(0, h * 0.2)],
        [np.random.uniform(w * 0.8, w*1.2), np.random.uniform(0, h * 0.6)],
        [np.random.uniform(w * 0.8, w), np.random.uniform(h * 0.8, h)],
        [np.random.uniform(0, w * 0.2), np.random.uniform(h * 0.8, h*1.5)]
    ])
    homography_matrix, _ = cv.findHomography(src_points, dst_points)
    image_projected = cv.warpPerspective(image, homography_matrix, (w, h))
    return image_projected


def display_images(images):
    N = len(images)
    cols = math.ceil(math.sqrt(N))
    rows = math.ceil(N / cols)

    for i, img in enumerate(images):
        plt.subplot(rows, cols, i + 1)
        plt.imshow(img, cmap='gray')
        plt.axis('off')
    plt.tight_layout()
    plt.show()


# Create N test images based on the originally created pattern.
N = 10
random_images = []
charuco_board_image = cv.cvtColor(charuco_board_image, cv.COLOR_GRAY2BGR)
for _ in range(N):
    random_images.append(generate_test_images(charuco_board_image))
display_images(random_images)


total_object_points = []
total_image_points = []
for img_bgr in random_images:
    img_gray = cv.cvtColor(img_bgr, cv.COLOR_BGR2GRAY)
    charuco_detector = cv.aruco.CharucoDetector(charuco_board)
    detection_results = BoardDetectionResults(
        *charuco_detector.detectBoard(img_gray)
    )

    point_references = PointReferences(
        *charuco_board.matchImagePoints(
            detection_results.charuco_corners,
            detection_results.charuco_ids
        )
    )
    plot_results(
        img_bgr,  # pass the colour image that plot_results converts to RGB
        charuco_board_image,
        detection_results,
        point_references
    )
    total_object_points.append(point_references.object_points)
    total_image_points.append(point_references.image_points)


calibration_results = CameraCalibrationResults(
    *cv.calibrateCamera(
        total_object_points,
        total_image_points,
        img_gray.shape[::-1],  # imageSize expects (width, height)
        None,
        None
    )
)
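
# (Sketch, my addition:) quick sanity check of the calibration output,
# using the CameraCalibrationResults NamedTuple defined above.
print(f"Reprojection error: {calibration_results.repError:.3f} px")
print("Camera matrix:\n", calibration_results.camMatrix)
print("Distortion coefficients:", calibration_results.distcoeff.ravel())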



"""P.S.: Markers are too small in bigger pictures. They seem to not be adjustable.
img_bgr_aruco = cv.aruco.drawDetectedMarkers(
    img_bgr.copy(),
    detection_results.aruco_corners
)
img_bgr_charuco = cv.aruco.drawDetectedCornersCharuco(
    img_bgr.copy(),
    detection_results.charuco_corners
)
"""

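One workaround (a sketch of my own, not a built-in option) is to draw the detected ChArUco corners manually, so the marker size stays controllable at high resolutions:

overlay = img_bgr.copy()
for x, y in detection_results.charuco_corners.reshape(-1, 2):
    # Radius and thickness are free to choose, unlike with drawDetectedCornersCharuco.
    cv.circle(overlay, (int(x), int(y)), 25, (0, 0, 255), 10)
cv.imwrite("charuco_corners_overlay.png", overlay)
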
The other possibility is to install opencv-contrib-python (pip install opencv-contrib-python) instead of opencv-python. Ideally, install it in a fresh environment without any old OpenCV installations.

There, the following two essential legacy functions are available (a rough sketch of that flow follows after the list):

cv.aruco.interpolateCornersCharuco
cv.aruco.calibrateCameraCharuco
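
For reference, a rough sketch of the legacy contrib flow (my assumption that the contrib build still exposes these functions; CharucoBoard_create and the signatures below come from the pre-4.7 aruco module):

import cv2 as cv

dictionary = cv.aruco.getPredefinedDictionary(cv.aruco.DICT_6X6_250)
# Old contrib factory: CharucoBoard_create(squaresX, squaresY, squareLength, markerLength, dictionary)
board = cv.aruco.CharucoBoard_create(8, 11, 500, 300, dictionary)

img = cv.imread("charuco_board.png", cv.IMREAD_GRAYSCALE)
corners, ids, _ = cv.aruco.detectMarkers(img, dictionary)
if ids is not None and len(ids) > 0:
    _, ch_corners, ch_ids = cv.aruco.interpolateCornersCharuco(corners, ids, img, board)
    # In practice you would collect corners/ids from several views before calibrating.
    rep_err, cam_matrix, dist_coeffs, rvecs, tvecs = cv.aruco.calibrateCameraCharuco(
        [ch_corners], [ch_ids], board, img.shape[::-1], None, None
    )
    print(rep_err, cam_matrix)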

For a more in-depth explanation see here:

Upvotes: 1
