skimon

Reputation: 1189

Applying fast inverse to concatenated 4x4 affine transforms?

Is it possible to apply the fast inverse of a matrix to a concatenation of pure rotation and translation matrices, e.g. M = T2*R2*T1*R1?

If I have a rotation and translation stored in a 4x4 homogeneous column order matrix I can say:

M1 = [ R1  t1 ]  given by [ 1  t1 ] * [ R1 0 ]
     [ 0    1 ]           [ 0   1 ]   [ 0  1 ]

and

inv(M1) = [ inv(R1)  inv(R1)*(-t1) ]  given by [ inv(R1)  0 ] * [ 1  -t1 ]
          [ 0              1       ]           [ 0        1 ]   [ 0    1 ]

and since R1 is rotation only we know inv(R1) = transpose(R1) so we can simply say:

inv(M1) = [ transp(R1)  transp(R1)*(-t1) ]
          [ 0                 1          ]
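For concreteness, here is a small numpy sketch of this single-matrix fast inverse (the names are illustrative only):

import numpy as np

# a rotation about Z plus a translation, as a 4x4 column-order matrix
c, s = np.cos(0.7), np.sin(0.7)
M1 = np.eye(4)
M1[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
M1[:3, 3] = [1.0, -2.0, 3.0]

# fast inverse: transpose the rotation block, rotate the negated translation
inv_M1 = np.eye(4)
inv_M1[:3, :3] = M1[:3, :3].T
inv_M1[:3, 3] = M1[:3, :3].T @ -M1[:3, 3]

assert np.allclose(inv_M1, np.linalg.inv(M1))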

Now, given some other similar rotation and translation matrix M2, if we form the concatenation of the two as MFinal = M2 * M1 = T2*R2*T1*R1,

can we say that

inv(MFinal) = [ transp(MFinalRot)  transp(MFinalRot)*(-tfinal) ]
              [ 0                             1                ]

where MFinalRot is the rotation part of the 4x4 matrix?

Additionally, what if the order were more arbitrary, for example MFinal2 = R3*T3 * T2*R2*T1*R1, but still built only from individual rotations and translations?

Upvotes: 1

Views: 891

Answers (2)

ddbc

Reputation: 16

The inverse of a product P = A*B of two square matrices is inv(B)*inv(A): in general you have to unwind the operations in the reverse of the order in which they were applied.

Rotations and translations do not commute outright, but you can move a translation past a rotation by rotating its offset: R*T(t) = T(R*t)*R. Applying this repeatedly, a chain like R1*T1*R2*T2 collapses to a single rotation combined with a single translation, i.e. a matrix of the form [R t; 0 1] where R is a pure rotation.

So yes, the fast inverse is sound for any concatenation of pure rotations and translations.
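As a quick sanity check, here is a minimal numpy sketch (the helpers rot_z, trans, and fast_inverse are illustrative names, not from any particular library):

import numpy as np

def rot_z(theta):
    # 4x4 homogeneous rotation about the Z axis
    c, s = np.cos(theta), np.sin(theta)
    M = np.eye(4)
    M[:2, :2] = [[c, -s], [s, c]]
    return M

def trans(t):
    # 4x4 homogeneous translation by vector t
    M = np.eye(4)
    M[:3, 3] = t
    return M

def fast_inverse(M):
    # valid only for rigid transforms: inv(M) = [R^T  -R^T*t; 0  1]
    Rt = M[:3, :3].T
    inv = np.eye(4)
    inv[:3, :3] = Rt
    inv[:3, 3] = -Rt @ M[:3, 3]
    return inv

MFinal = trans([1, 2, 3]) @ rot_z(0.5) @ trans([-4, 0, 1]) @ rot_z(1.2)
assert np.allclose(fast_inverse(MFinal), np.linalg.inv(MFinal))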

Upvotes: 0

comingstorm

Reputation: 26117

Yes, if your 4x4 matrix is the concatenation of pure rotation and translation matrices, you should be able to compute a fast inverse as:

fast_inverse( [R1  t1] ) = [transpose(R1)  transpose(R1)*(-t1)]
              [0    1]     [     0                1           ]

This is because the 3x3 rotation part (R1 in your notation) will be a product of the input rotation matrices only, so it is itself a rotation matrix, and its transpose is its inverse.

If any of your concatenated matrices are scaling matrices, or if the bottom row is not [0 0 0 1], then this is not true any more.
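To see that caveat concretely, here is a small illustrative numpy sketch where a scale sneaks in:

import numpy as np

M = np.diag([2.0, 2.0, 2.0, 1.0])   # uniform scale: not a rotation
M[:3, 3] = [1.0, 0.0, 0.0]          # plus a translation
Rt = M[:3, :3].T                    # the transpose is NOT the inverse here
fast = np.eye(4)
fast[:3, :3] = Rt
fast[:3, 3] = -Rt @ M[:3, 3]
print(np.allclose(fast, np.linalg.inv(M)))  # False once scaling is involved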

Also note that, in practice, if you multiply enough matrices together, floating-point error may cause the product to "drift", so it may not be as close to a proper rotation matrix as a freshly generated one. Depending on how you use it, this may not be a problem -- but if it is, you can "re-orthonormalize" it, as below:

import numpy as np

def orth(a, b):
    # return the component of a that is orthogonal to b
    return a - (np.dot(a, b) / np.dot(b, b)) * b

def re_orthonormalize(Rin):
    # Rin: 3x3 near-rotation matrix whose columns are the basis vectors
    x = Rin[:, 0]
    y = orth(Rin[:, 1], x)           # make y exactly orthogonal to x
    z = orth(orth(Rin[:, 2], x), y)  # make z orthogonal to both x and y
    norm = lambda v: v / np.linalg.norm(v)
    return np.column_stack([norm(x), norm(y), norm(z)])

As long as your input isn't too far off, this should give you a proper rotation matrix.
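As an illustrative check of the drift-and-repair idea (assuming the re_orthonormalize above):

import numpy as np

# accumulate many small Z rotations so rounding error builds up
c, s = np.cos(1e-3), np.sin(1e-3)
step = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
R = np.eye(3)
for _ in range(100_000):
    R = R @ step

Rfix = re_orthonormalize(R)
print(np.abs(R @ R.T - np.eye(3)).max())        # small but nonzero drift
print(np.abs(Rfix @ Rfix.T - np.eye(3)).max())  # back near machine epsilon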


To see how the re_orthonormalize code works, first take the dot product of the orth output with its b input. Because the dot product is linear, we have:

dot(a - (dot(a,b)/dot(b,b)) * b, b)
  == dot(a,b) - (dot(a,b)/dot(b,b)) * dot(b,b)
  == dot(a,b) - dot(a,b)
  == 0

So, if a and b are already mostly orthogonal, orth(a, b) subtracts a small multiple of b to make sure the dot product really is 0.

That means that in re_orthonormalize, y is exactly orthogonal to x. The tricky bit is making sure that z is orthogonal to both x and y. This only works because we have already made x exactly orthogonal to y: the outer orth call subtracts a multiple of y, and since y is orthogonal to x, that doesn't stop orth(Rin[:, 2], x) from being orthogonal to x.
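A quick numeric check of that argument (illustrative, reusing the orth above):

import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))  # three arbitrary vectors
y = orth(b, a)
z = orth(orth(c, a), y)
print(np.dot(y, a), np.dot(z, a), np.dot(z, y))  # all ~0 up to rounding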

Upvotes: 1
