Reputation: 465
This may seem stupid, but the more I read about camera/image/world coordinate systems and the conversions between them, the more confused I get.
I am using depth maps synthetically generated with Blender 2.78 and processing them further; this processing also involves cropping and resizing the images. I have my camera matrix (intrinsic and extrinsic parameters) as well as the perspective projection matrix.
I am detecting keypoints in these cropped, resized images and trying to recover the world coordinates of those keypoints. If I am not mistaken, the keypoints are defined in the image coordinate system, but due to the cropping they have different locations than in the original image I rendered with Blender.
Will applying the camera matrices to these keypoint coordinates give me accurate world coordinates? And is there a way to verify that the results really are the world coordinates of these image keypoints?
EDIT
Original image rendered using Blender at resolution 960×540:
Image after pre-processing (cropping and resizing), with detected SIFT keypoints, at resolution 250×200:
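For reference, this is roughly how I understand the back-projection I am trying to do, as a minimal sketch assuming a pinhole model. The names `K`, `R`, `t` and all numeric values here are hypothetical placeholders, not my actual Blender parameters; the round-trip reprojection at the end is the kind of sanity check I have in mind:

```python
import numpy as np

def backproject_to_world(u, v, depth, K, R, t):
    """Back-project pixel (u, v) with known depth into world coordinates.

    Assumes a pinhole camera: K is the 3x3 intrinsic matrix,
    R (3x3) and t (3,) are the world-to-camera rotation and translation,
    and depth is the z-coordinate in the camera frame (as in a depth map).
    """
    # Pixel -> camera-frame point: scale the normalized ray by the depth.
    pixel = np.array([u, v, 1.0])
    cam_point = depth * (np.linalg.inv(K) @ pixel)
    # Camera -> world: X_w = R^T (X_c - t)
    return R.T @ (cam_point - t)

# Hypothetical intrinsics/extrinsics for illustration only.
K = np.array([[500.0, 0.0, 480.0],
              [0.0, 500.0, 270.0],
              [0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)

X_w = backproject_to_world(480.0, 270.0, 2.0, K, R, t)

# Sanity check: reproject the world point and compare with the pixel.
p = K @ (R @ X_w + t)
p /= p[2]
```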
Upvotes: 0
Views: 1934
Reputation: 20160
You'll have to adjust the camera intrinsics, since the principal point and pixel size have changed. Alternatively, just map the keypoint locations back through the inverse of the resize/crop operation!
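Both options can be sketched as follows, assuming the pre-processing was a crop of a `(crop_w, crop_h)` window at offset `(crop_x0, crop_y0)` followed by a resize to `(out_w, out_h)`. The function names and the crop parameters are hypothetical; plug in whatever your pipeline actually used:

```python
import numpy as np

def keypoints_to_original(kps, crop_x0, crop_y0, crop_w, crop_h, out_w, out_h):
    """Option 1: undo the resize and crop on the keypoints themselves.

    kps is an (N, 2) array of (u, v) pixel coordinates in the
    cropped/resized image; the result is in original-image coordinates,
    ready to use with the unmodified intrinsics.
    """
    kps = np.asarray(kps, dtype=float)
    scale = np.array([crop_w / out_w, crop_h / out_h])  # undo the resize
    return kps * scale + np.array([crop_x0, crop_y0])   # undo the crop

def adjust_intrinsics(K, crop_x0, crop_y0, crop_w, crop_h, out_w, out_h):
    """Option 2: adjust K so it applies directly to the processed image.

    Cropping shifts the principal point; resizing scales the focal
    lengths and the principal point.
    """
    K2 = np.array(K, dtype=float)
    sx, sy = out_w / crop_w, out_h / crop_h
    K2[0, 0] *= sx
    K2[1, 1] *= sy
    K2[0, 2] = (K2[0, 2] - crop_x0) * sx
    K2[1, 2] = (K2[1, 2] - crop_y0) * sy
    return K2
```

The two options are equivalent: the normalized viewing ray you get from the adjusted `K` at a keypoint in the small image equals the ray from the original `K` at the mapped-back location, so use whichever is more convenient in your pipeline.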
Upvotes: 2