Reputation: 23
I am trying to understand how to get a point cloud from the depth map produced by an iPhone 11 Pro (in Python). I obtained the intrinsic matrix below using an online EXIF tool:
Intrinsic_Matrix = [[3131.580078125, 0.0, 0.0], [0.0, 3131.580078125, 0.0], [1505.6236572265625, 2004.1282958984375, 1]]
I currently have a grayscale image with values between 1 and 255 representing depth. I would like to convert these values to depth coordinates in meters.
Sorry, I only have basic knowledge of this topic; any help would be appreciated. Below is the code I tried, which failed:
import cv2

img_depth = cv2.imread('Images/image_depth11.jpg')
h, w, c = img_depth.shape  # was img_rgb, which is undefined
new_img = img_depth[:, :, 0]  # keep a single channel
hd, wd = new_img.shape
# imC = cv2.applyColorMap(img_depth, cv2.COLORMAP_JET)

# intrinsic matrix for the iPhone 11 Pro (note: stored transposed, so
# fx/fy sit on the diagonal and cx/cy are in the last row)
Matrix = [[3131.580078125, 0.0, 0.0], [0.0, 3131.580078125, 0.0], [1505.6236572265625, 2004.1282958984375, 1]]
# the original multiply-then-divide by wd/hd cancelled out, so read the
# values directly:
focalX = Matrix[0][0]
focalY = Matrix[1][1]
principalPointX = Matrix[2][0]
principalPointY = Matrix[2][1]
Is the distance in the z direction simply 1/Z (i.e., 1/255 for each grayscale value)?
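For reference, here is a minimal sketch of the back-projection step itself, using the standard pinhole model X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy. It assumes the depth map already holds metric distances; a 1-255 grayscale JPEG is normalized, so it would have to be rescaled to meters first (the function name and the tiny synthetic depth map below are my own, for illustration):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a metric depth map into an (H*W, 3) array of
    camera-space (X, Y, Z) points using the pinhole model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx  # X = (u - cx) * Z / fx
    y = (v - cy) * z / fy  # Y = (v - cy) * Z / fy
    return np.stack((x, y, z), axis=-1).reshape(-1, 3)

# Synthetic example: a 4x4 depth map that is 2 m everywhere,
# with the intrinsics from the question
depth = np.full((4, 4), 2.0)
pts = depth_to_point_cloud(depth, fx=3131.58, fy=3131.58,
                           cx=1505.62, cy=2004.13)
print(pts.shape)  # (16, 3)
```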
Thanks, Nit
Upvotes: 0
Views: 464
Reputation: 11
For absolute distance, you need to use a TrueDepth camera.
The TrueDepth camera produces disparity maps by default, so that the resulting depth data is similar to that produced by a dual-camera device. However, unlike a dual-camera device, the TrueDepth camera can directly measure depth (in meters) with AVDepthData.Accuracy.absolute accuracy. To capture depth instead of disparity, set the activeDepthDataFormat of the capture device before starting your capture session.
See Apple's AVDepthData documentation for further details on obtaining (x, y, depth or distance in meters from the iPhone lens plane) values for a given pixel (x, y).
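Since the camera produces disparity by default, depth is its reciprocal: depth (m) = 1 / disparity (1/m). If all you have is an 8-bit normalized disparity map, you also need the original minimum and maximum disparity from the capture metadata to recover meters; they are not stored in a plain grayscale JPEG. A hedged sketch, where the min/max values below are made-up placeholders:

```python
import numpy as np

def normalized_disparity_to_depth(gray, min_disp, max_disp):
    """Undo 8-bit normalization back to disparity (1/m), then invert
    to get depth in meters. min_disp/max_disp must come from the
    capture metadata -- the values used below are only placeholders."""
    disp = min_disp + (gray.astype(np.float64) / 255.0) * (max_disp - min_disp)
    disp = np.clip(disp, 1e-6, None)  # guard against division by zero
    return 1.0 / disp                 # depth (m) = 1 / disparity (1/m)

gray = np.array([[0, 255]], dtype=np.uint8)
depth = normalized_disparity_to_depth(gray, min_disp=0.5, max_disp=4.0)
print(depth)  # [[2.0, 0.25]]
```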
Upvotes: 1