I'm developing a medical app that measures patients' wounds: computer vision gives me the wound's 2D contour coordinates, and AR gives me the depth by performing a hit test with the camera.
Let's say my hit test result is
{
  "type": "FeaturePoint",
  "transform": {
    "rotation": [0, 0, 0],
    "position": [0.34072238206863403, -0.017041677609086037, 0.09095178544521332],
    "scale": [1, 1, 1]
  }
}
and the inference returned by my computer vision model is
{
  "predictions": [
    {
      "x": 165.5,
      "y": 209.5,
      "width": 83,
      "height": 53,
      "confidence": 0.884,
      "class": "wounds",
      "points": [
        { "x": 140, "y": 182.809 },
        { "x": 139.5, "y": 183.477 },
        { "x": 138.5, "y": 183.477 },
        { "x": 138, "y": 184.144 },
        { "x": 137, "y": 184.144 },
        { "x": 136.5, "y": 184.811 },
        { "x": 136, "y": 184.811 },
        { "x": 135, "y": 186.145 },
        { "x": 134, "y": 186.145 },
        { "x": 133.5, "y": 186.813 },
        { "x": 133, "y": 186.813 },
        { "x": 132.5, "y": 187.48 },
        { "x": 132, "y": 187.48 },
        .....
      ]
    }
  ]
}
Can I use the back-projection equations, that is, for each 2D point (x, y) map it to a 3D point (X, Y, Z) with

X = (x - cx) * Z / fx
Y = (y - cy) * Z / fy

where

cx, cy = principal point (image center),
fx, fy = focal lengths in pixels,
Z = depth from the hit test,

to map the 2D coordinates from the computer vision model to 3D real-world points? If not, what else can I do to measure the wound accurately?
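For concreteness, here is a minimal Python sketch of that back projection, plus a shoelace-formula area estimate. It assumes the wound is roughly planar at the hit-test depth Z; the intrinsics fx, fy, cx, cy below are made-up placeholder values (on iOS they would come from ARFrame.camera.intrinsics rather than being hard-coded):

```python
import numpy as np

def back_project(points_2d, Z, fx, fy, cx, cy):
    """Back-project 2D pixel points to 3D camera-space points,
    assuming a pinhole camera and a single depth Z for all points."""
    pts = np.asarray(points_2d, dtype=float)
    X = (pts[:, 0] - cx) * Z / fx
    Y = (pts[:, 1] - cy) * Z / fy
    return np.column_stack([X, Y, np.full(len(pts), Z)])

def polygon_area(pts):
    """Shoelace area of a closed polygon given an (N, 2) array of vertices."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Placeholder intrinsics and depth (assumptions, not values from the question)
fx, fy = 1500.0, 1500.0   # focal lengths in pixels
cx, cy = 320.0, 240.0     # principal point
Z = 0.35                  # metres, depth from the hit test

# First few contour points from the model output
contour = [(140, 182.809), (139.5, 183.477), (138.5, 183.477)]
pts_3d = back_project(contour, Z, fx, fy, cx, cy)

# Since all points share the same Z, the wound area can be taken
# in the X-Y plane of the camera frame
area_m2 = polygon_area(pts_3d[:, :2])
```

Note that this gives coordinates in the camera frame, not world coordinates; multiplying by the camera's pose (e.g. ARFrame.camera.transform) would move them into world space, though for measuring lengths and areas the camera frame is already sufficient.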
I would greatly appreciate any guidance that can point me in the right direction. Thank you!