McLovin

Reputation: 3415

Quaternion based camera

I'm trying to implement an FPS camera based on quaternion math. I store a rotation quaternion in a variable called _quat and multiply it by another quaternion when needed. Here's some code:

void Camera::SetOrientation(float rightAngle, float upAngle)//in degrees
{
    glm::quat q = glm::angleAxis(glm::radians(-upAngle), glm::vec3(1,0,0));
              q*= glm::angleAxis(glm::radians(rightAngle), glm::vec3(0,1,0));

    _quat = q;
}

void Camera::OffsetOrientation(float rightAngle, float upAngle)//in degrees
{
    glm::quat q = glm::angleAxis(glm::radians(-upAngle), glm::vec3(1,0,0));
              q*= glm::angleAxis(glm::radians(rightAngle), glm::vec3(0,1,0));

    _quat *= q;
}

The application can request the orientation matrix via GetOrientation, which simply casts the quaternion to a matrix.

glm::mat4 Camera::GetOrientation() const
{
    return glm::mat4_cast(_quat);
}

The application changes the orientation in the following way:

int diffX = ...;//some computations based on mouse movement
int diffY = ...;

camera.OffsetOrientation(g_mouseSensitivity * diffX, g_mouseSensitivity * diffY);

This results in bad, mixed rotations around pretty much all the axes. What am I doing wrong?

Upvotes: 10

Views: 15967

Answers (4)

Gregery Barton

Reputation: 56

The up-angle rotation should be pre-multiplied: post-multiplying rotates the world around the origin through (1,0,0), while pre-multiplying rotates the camera.

glm::quat q_up    = glm::angleAxis(glm::radians(-upAngle),   glm::vec3(1,0,0));
glm::quat q_right = glm::angleAxis(glm::radians(rightAngle), glm::vec3(0,1,0));

_quat *= q_right;     // post-multiply the right/yaw rotation
_quat = q_up * _quat; // pre-multiply the up/pitch rotation

Upvotes: 0

dwn

Reputation: 563

I haven't used GLM, so maybe you won't like this answer. However, performing the quaternion rotation yourself is not bad.

Let's say your camera has an initial saved orientation 'vecOriginalDirection' (a normalized vec3), and you want it to follow another direction 'vecDirection' (also normalized). That way we can adopt a trackball-like approach and treat vecDirection as a deflection from whatever the camera's default focus is.

The usually preferred way to do quaternion rotation in the real world is using NLERP. Let's see if I can remember: in pseudocode (assuming floating-point) I think it's this:

quat = normalize([   cross(vecDirection, vecOriginalDirection),
                  1. + dot(vecDirection, vecOriginalDirection)]);

(Don't forget the '1. +'; I pulled my hair out for a few days until finding it. The (cross, dot) pair encodes the double of the angle between the vectors, and adding 1 to the scalar part -- effectively averaging in the unit quaternion -- then renormalizing halves it back to the intended angle.)

Renormalizing, shown above as 'normalize()', is essential (it's the 'N' in NLERP). Of course, normalizing quat (x,y,z,w) is just:

quat /= sqrt(x*x+y*y+z*z+w*w);
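In GLM terms, that recipe might look roughly like the sketch below (the helper name is mine; I'm assuming glm::quat's (w, x, y, z) constructor order, and as written it rotates vecDirection onto vecOriginalDirection -- swap the arguments for the opposite sense):

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Sketch of the cross/dot construction above; both inputs must be normalized.
glm::quat RotationBetween(const glm::vec3& vecDirection,
                          const glm::vec3& vecOriginalDirection)
{
    glm::vec3 axis = glm::cross(vecDirection, vecOriginalDirection);
    float     w    = 1.0f + glm::dot(vecDirection, vecOriginalDirection);
    // glm::quat takes (w, x, y, z); normalize() is the 'N' in NLERP
    return glm::normalize(glm::quat(w, axis.x, axis.y, axis.z));
}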

Then, if you want to use your own function to make a 3x3 orientation matrix from quat:

// assuming quat components x, y, z, w (normalized);
// m is a 3x3 matrix, here taken to be column-major (OpenGL convention)
float xx=2.f*x*x, yy=2.f*y*y, zz=2.f*z*z,
      xy=2.f*x*y, xz=2.f*x*z, yz=2.f*y*z,
      wx=2.f*w*x, wy=2.f*w*y, wz=2.f*w*z;
m[0]=1.f-(yy+zz);  m[1]=xy+wz;        m[2]=xz-wy;
m[3]=xy-wz;        m[4]=1.f-(xx+zz);  m[5]=yz+wx;
m[6]=xz+wy;        m[7]=yz-wx;        m[8]=1.f-(xx+yy);

To actually implement a trackball, you'll need to calculate vecDirection when the finger is held down, and save it off to vecOriginalDirection when it is first pressed down (assuming touch interface).

You'll also probably want to calculate these values based on a piecewise half-sphere/hyperboloid function, if you aren't already. I think @minorlogic was trying to save some tinkering, since it sounds like you might be able to just use a drop-in virtual trackball.
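For reference, that piecewise mapping usually looks something like the sketch below (names are mine; x and y are cursor coordinates already centred and scaled to roughly [-1, 1], and the sign of z may need flipping for your conventions):

#include <cmath>
#include <glm/glm.hpp>

// Bell's virtual trackball: a sphere in the middle, a hyperbolic sheet outside
glm::vec3 TrackballDirection(float x, float y, float radius = 1.0f)
{
    const float d2 = x * x + y * y;
    float z;
    if (d2 <= 0.5f * radius * radius)
        z = std::sqrt(radius * radius - d2);          // inside: on the sphere
    else
        z = (0.5f * radius * radius) / std::sqrt(d2); // outside: on the hyperboloid
    return glm::normalize(glm::vec3(x, y, z));
}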

Upvotes: 4

Damon

Reputation: 70206

The problem

As already pointed out by GuyRT, the way you do accumulation is not good. In theory, it would work that way. However, floating point math is far from being perfectly precise, and errors accumulate the more operations you do. Composing two quaternion rotations is 28 operations versus a single operation adding a value to an angle (plus, each of the operations in a quaternion multiplication affects the resulting rotation in 3D space in a very non-obvious way).
Also, quaternions used for rotation are rather sensitive to being normalized, and rotating them de-normalizes them slightly (rotating them many times de-normalizes them a lot, and rotating them by another, already de-normalized quaternion amplifies the effect).
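If you do keep accumulating into a quaternion anyway, renormalizing it after every update keeps the drift in check. A sketch of your OffsetOrientation with that one change (glm::normalize for quaternions lives in glm/gtc/quaternion.hpp); note this only addresses the drift, not the way the rotations combine:

void Camera::OffsetOrientation(float rightAngle, float upAngle)//in degrees
{
    glm::quat q = glm::angleAxis(glm::radians(-upAngle), glm::vec3(1,0,0));
    q *= glm::angleAxis(glm::radians(rightAngle), glm::vec3(0,1,0));

    _quat = glm::normalize(_quat * q);//renormalize to counter numerical drift
}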

Reflection

Why do we use quaternions in the first place?

Quaternions are commonly used for the following reasons:

  1. Avoiding the dreaded gimbal lock (although a lot of people misunderstand the issue: replacing three angles with three quaternions does not magically remove the fact that one is still combining three rotations around the unit axes -- quaternions must be used correctly to avoid the problem)
  2. Efficient combination of many rotations, such as in skinning (28 ops versus 45 ops when using matrices), saving ALU.
  3. Fewer values (and thus fewer degrees of freedom), fewer ops, so less opportunity for undesirable effects compared to using matrices when combining many transformations.
  4. Fewer values to upload, for example when a skinned model has a couple of hundred bones or when drawing ten thousand instances of an object. Smaller vertex streams or uniform blocks.
  5. Quaternions are cool, and people using them are cool.

None of these really makes a difference for your problem.

Solution

Accumulate the two rotations as angles (normally undesirable, but perfectly acceptable for this case), and create a rotation matrix when you need it. This can be done either by combining two quaternions and converting to a matrix as in GuyRT's answer, or by directly generating the rotation matrix (which is likely more efficient, and all that OpenGL wants to see is that one matrix anyway).

To my knowledge, glm::rotate only does rotate-around-arbitrary-axis, which you could of course use (but then you'd rather combine two quaternions!). Luckily, the formula for a matrix combining rotations around x, then y, then z is well known and straightforward; you can find it, for example, in the second paragraph of (3) here.
You do not wish to rotate around z, so cos(gamma) = 1 and sin(gamma) = 0, which greatly simplifies the formula (write it out on a piece of paper; a sketch of the result follows below).
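Here is roughly what that simplified matrix could look like with GLM (the helper name is mine; the signs follow your convention of Rx(-upAngle) * Ry(rightAngle), and GLM matrices are column-major, so each vec4 below is one column):

#include <cmath>
#include <glm/glm.hpp>

glm::mat4 MakeOrientation(float rightAngle, float upAngle)//in degrees
{
    const float p = glm::radians(-upAngle);    // pitch about x
    const float y = glm::radians(rightAngle);  // yaw about y
    const float cp = std::cos(p), sp = std::sin(p);
    const float cy = std::cos(y), sy = std::sin(y);

    return glm::mat4(glm::vec4( cy,    sp*sy, -cp*sy, 0.0f),
                     glm::vec4( 0.0f,  cp,     sp,    0.0f),
                     glm::vec4( sy,   -sp*cy,  cp*cy, 0.0f),
                     glm::vec4( 0.0f,  0.0f,   0.0f,  1.0f));
}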

Using rotation angles is something that will make many people shout at you (often not entirely undeserved).
A cleaner alternative is keeping track of the direction you look in, either with a vector pointing from your eye towards where you wish to look, or by remembering the point in space that you look at (the latter combines nicely with physics in a 3rd-person game, too). That also needs an "up" vector if you want to allow arbitrary rotations -- since then "up" isn't always the world-space "up" -- so you may need two vectors. This is much nicer and more flexible, but also more complex (a minimal sketch follows after this paragraph).
For what is desired in your example, an FPS where your only options are to look left-right and up-down, I find rotation angles -- for the camera only -- entirely acceptable.
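A minimal sketch of that direction-plus-up bookkeeping (names are mine; glm::lookAt does the actual work):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build a view matrix from an eye position, a normalized view direction and an up vector.
glm::mat4 ViewFromDirection(const glm::vec3& eye,
                            const glm::vec3& viewDirection,
                            const glm::vec3& up)
{
    return glm::lookAt(eye, eye + viewDirection, up);
}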

Upvotes: 8

GuyRT

Reputation: 2917

The problem is the way that you are accumulating rotations. This would be the same whether you use quaternions or matrices. Combining a rotation representing pitch and yaw with another will introduce roll.

By far the easiest way to implement an FPS camera is to simply accumulate changes to the heading and pitch, then convert to a quaternion (or matrix) when you need to. I would change the methods in your camera class to:

void Camera::SetOrientation(float rightAngle, float upAngle)//in degrees
{
    _rightAngle = rightAngle;
    _upAngle = upAngle;
}

void Camera::OffsetOrientation(float rightAngle, float upAngle)//in degrees
{
    _rightAngle += rightAngle;
    _upAngle += upAngle;
}

glm::mat4 Camera::GetOrientation() const
{
    glm::quat q = glm::angleAxis(glm::radians(-_upAngle), glm::vec3(1,0,0));
              q*= glm::angleAxis(glm::radians(_rightAngle), glm::vec3(0,1,0));
    return glm::mat4_cast(q);
}

Upvotes: 9
