dim_tz

Reputation: 1531

Eigen - Re-orthogonalization of Rotation Matrix

After multiplying many rotation matrices, the end result may no longer be a valid rotation matrix due to rounding errors (it becomes de-orthogonalized).

One way to re-orthogonalize is to follow these steps:

  1. Convert the rotation matrix to an axis-angle representation
  2. Convert the axis-angle representation back to a rotation matrix

Is there something in Eigen library that does the same thing by hiding all the details? Or is there any better recipe?

This procedure has to be handled with care due to special singularity cases, so if Eigen provides a better tool for this it would be great.

Upvotes: 19

Views: 12530

Answers (7)

Mikael Tulldahl

Reputation: 11

Here is an implementation using Eigen. It adjusts any two vectors that are not orthogonal equally towards/away from each other, so it disturbs the rotation represented by the matrix as little as possible.

void orthonormalize(Eigen::Matrix3d &rot) {
  Eigen::Vector3d x = rot.col(0);
  Eigen::Vector3d y = rot.col(1);
  Eigen::Vector3d z = rot.col(2);

  // normalize
  x.normalize();
  y.normalize();
  z.normalize();

  // orthogonalize: spread each pairwise error equally over both vectors
  double errorXY = 0.5 * x.dot(y);
  double errorYZ = 0.5 * y.dot(z);
  double errorZX = 0.5 * z.dot(x);
  rot.col(0) = x - errorXY * y - errorZX * z;
  rot.col(1) = y - errorXY * x - errorYZ * z;
  rot.col(2) = z - errorYZ * y - errorZX * x;
}

We can test that it actually works with

double orthonormalityError(const Eigen::Matrix3d &rot) {
  Eigen::Matrix3d prod = rot * rot.transpose();
  return (prod - Eigen::Matrix3d::Identity()).norm();
}

Eigen::Matrix3d randomRotationMatrix() {
  Eigen::Matrix3d rand_mat = Eigen::Matrix3d::Random();
  Eigen::HouseholderQR<Eigen::Matrix3d> qr(rand_mat);
  return qr.householderQ();
}

void testOrthonormalize() {
  Eigen::Matrix3d rot = randomRotationMatrix();
  rot += Eigen::Matrix3d::Random() * 0.1;
  std::cout << orthonormalityError(rot) << std::endl;
  for (int j = 0; j < 4; j++) {
    orthonormalize(rot);
    std::cout << orthonormalityError(rot) << std::endl;
  }
}

int main(int argc, char** argv) {
  testOrthonormalize();
  return 0;
}

Which prints:

0.246795
0.00948343
1.45409e-05
3.59979e-11
3.4066e-16

Upvotes: 0

Trey Reynolds

Reputation: 763

M is the matrix we want to orthonormalize, and R is the rotation matrix closest to M.

Analytic Solution

Matrix R = M*inverse(sqrt(transpose(M)*M));

Iterative Solution

// To re-orthogonalize matrix M, repeat:
M = 0.5f*(inverse(transpose(M)) + M);
// until M converges

M converges to R, the nearest rotation matrix. The number of digits of accuracy will approximately double with each iteration.

To know when (M + M^-T)/2 meets your accuracy goal, check whether the sum of the squares of the elements of (M - M^-T)/2 is less than the square of your error goal. Here M^-T denotes the inverse transpose of M.

Why It Works

We want to find the rotation matrix R which is closest to M. We will define the error as the sum of squared differences of the elements. That is, minimize trace((M - R)^T (M - R)).

The analytic solution is R = M (M^T M)^-(1/2), outlined here.

The problem is that this requires finding the square root of M^T M. Notice, however, that many matrices share the same closest rotation R. One of them is M (M^T M)^-1, which simplifies to M^-T, the inverse transpose. Conveniently, M and M^-T lie on opposite sides of R (intuitively, like a and 1/a lie on opposite sides of 1).

We recognize that the average (M + M^-T)/2 will be closer to R than M, and because it is a linear combination, it also keeps R as its closest rotation matrix. Iterating this, we converge to R.

Worst Case Convergence (Speculative)

Because it is related to the Babylonian square root algorithm, it converges quadratically.

The exact worst case error after one iteration of a matrix M and error e is nextE:

nextE = e^2/(2 e + 2)
e = sqrt(trace((M - R)^T (M - R)))
R = M (M^T M)^-(1/2)

Upvotes: 3

Ali

Reputation: 58461

I don't use Eigen and didn't bother to look up the API but here is a simple, computationally cheap and stable procedure to re-orthogonalize the rotation matrix. This orthogonalization procedure is taken from Direction Cosine Matrix IMU: Theory by William Premerlani and Paul Bizard; equations 19-21.

Let x, y and z be the row vectors of the (slightly messed-up) rotation matrix, and let error=dot(x,y), where dot() is the dot product. If the matrix were orthogonal, the dot product of x and y, that is, the error, would be zero.

The error is spread equally across x and y: x_ort=x-(error/2)*y and y_ort=y-(error/2)*x. The third row z_ort=cross(x_ort, y_ort) is, by definition, orthogonal to x_ort and y_ort.

Now, you still need to normalize x_ort, y_ort and z_ort, as these vectors are supposed to be unit vectors. The scaling factor 0.5*(3-dot(v,v)) below is a first-order Taylor approximation of 1/sqrt(dot(v,v)), which is cheap and accurate when the norm is already close to 1.

x_new = 0.5*(3-dot(x_ort,x_ort))*x_ort
y_new = 0.5*(3-dot(y_ort,y_ort))*y_ort
z_new = 0.5*(3-dot(z_ort,z_ort))*z_ort

That's all, we are done.

It should be pretty easy to implement this with the API provided by Eigen. You can easily come up with other orthogonalization procedures, but I don't think it will make a noticeable difference in practice. I used the above procedure in my motion tracking application and it worked beautifully; it's both stable and fast.

Upvotes: 16

chtz

Reputation: 18807

An alternative is to use Eigen::Quaternion to represent your rotation. This is much easier to normalize, and rotation*rotation products are generally faster. If you have a lot of rotation*vector products (with the same matrix), you should locally convert the quaternion to a 3x3 matrix.

Upvotes: 1

DavidS

Reputation: 513

Singular Value Decomposition should be very robust. To quote from the reference:

Let M=UΣV^T be the singular value decomposition of M; then R=UV^T.

For your matrix, the singular-values in Σ should be very close to one. The matrix R is guaranteed to be orthogonal, which is the defining property of a rotation matrix. If there weren't any rounding errors in calculating your original rotation matrix, then R will be exactly the same as your M to within numerical precision.

Upvotes: 6

dim_tz

Reputation: 1531

In the meantime:

#include <Eigen/Geometry>

Eigen::Matrix3d mmm;
Eigen::Matrix3d rrr;
rrr <<  0.882966, -0.321461,  0.342102,
        0.431433,  0.842929, -0.321461,
       -0.185031,  0.431433,  0.882966;
     // replace this with any rotation matrix

mmm = rrr;

Eigen::AngleAxisd aa(rrr);    // RotationMatrix to AxisAngle
rrr = aa.toRotationMatrix();  // AxisAngle      to RotationMatrix

std::cout <<     mmm << std::endl << std::endl;
std::cout << rrr     << std::endl << std::endl;
std::cout << rrr-mmm << std::endl << std::endl;

Which is nice news, because I can get rid of my custom method and have one headache less (how can one be sure of handling all the singularities?),

but I really want your opinion on better/alternative ways :)

Upvotes: 2

Lutz Lehmann

Reputation: 25992

You can use a QR decomposition to systematically re-orthogonalize, replacing the original matrix with the Q factor. With library routines you have to check that the diagonal entries of R are positive (close to 1 if the original matrix was close to orthogonal) and, if necessary, correct by negating the corresponding column of Q.

The closest rotation matrix Q to a given matrix is obtained from the polar or QP decomposition, where P is a positive semi-definite symmetric matrix. The QP decomposition can be computed iteratively or using a SVD. If the latter has the factorization USV', then Q=UV'.

Upvotes: 11
