Reputation: 489
I followed this tutorial and got the expected animation output for a rigged model. The tutorial uses Assimp, GLSL and C++ to load a rigged model from a file. However, there were a few things I couldn't figure out. The first is that Assimp's transformation matrices are row-major, and the tutorial uses a Matrix4f class which stores those matrices exactly as they are, i.e. in row-major order. The constructor of that Matrix4f class is as given:
Matrix4f(const aiMatrix4x4& AssimpMatrix)
{
    m[0][0] = AssimpMatrix.a1; m[0][1] = AssimpMatrix.a2; m[0][2] = AssimpMatrix.a3; m[0][3] = AssimpMatrix.a4;
    m[1][0] = AssimpMatrix.b1; m[1][1] = AssimpMatrix.b2; m[1][2] = AssimpMatrix.b3; m[1][3] = AssimpMatrix.b4;
    m[2][0] = AssimpMatrix.c1; m[2][1] = AssimpMatrix.c2; m[2][2] = AssimpMatrix.c3; m[2][3] = AssimpMatrix.c4;
    m[3][0] = AssimpMatrix.d1; m[3][1] = AssimpMatrix.d2; m[3][2] = AssimpMatrix.d3; m[3][3] = AssimpMatrix.d4;
}
However, when the tutorial calculates the final node transformation, the multiplications are done as if the matrices were in column-major order, as shown below:
Matrix4f NodeTransformation;
NodeTransformation = TranslationM * RotationM * ScalingM; // note here

Matrix4f GlobalTransformation = ParentTransform * NodeTransformation;

if (m_BoneMapping.find(NodeName) != m_BoneMapping.end())
{
    unsigned int BoneIndex = m_BoneMapping[NodeName];
    m_BoneInfo[BoneIndex].FinalTransformation = m_GlobalInverseTransform * GlobalTransformation * m_BoneInfo[BoneIndex].BoneOffset;
    m_BoneInfo[BoneIndex].NodeTransformation = GlobalTransformation;
}
Finally, since the calculated matrices are in row-major order, this is indicated when passing them to the shader by setting the transpose flag to GL_TRUE in the following function. OpenGL then knows the data is row-major, since OpenGL itself uses column-major order.
void SetBoneTransform(unsigned int Index, const Matrix4f& Transform)
{
    glUniformMatrix4fv(m_boneLocation[Index], 1, GL_TRUE, (const GLfloat*)Transform);
}
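For what it's worth, passing GL_TRUE here is equivalent to transposing the data yourself and passing GL_FALSE. A minimal sketch, assuming Matrix4f exposes its row-major float array as a member m[row][col] (as the constructor above suggests); the function name is hypothetical:

void SetBoneTransformTransposed(unsigned int Index, const Matrix4f& Transform)
{
    // Write the row-major source into a column-major temporary...
    GLfloat tmp[16];
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            tmp[c * 4 + r] = Transform.m[r][c];

    // ...so GL can take the data as-is, without transposing it.
    glUniformMatrix4fv(m_boneLocation[Index], 1, GL_FALSE, tmp);
}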
So how does the calculation, done as if in column-major order,
transformation = translation * rotation * scale * vertices
yield a correct output? I expected that for the calculation to hold, each matrix would first have to be transposed into column-major order, then multiplied as above, and finally transposed again to get back a row-major matrix, which is also discussed in this link. However, doing so produced a horrible output. Is there something I am missing here?
Upvotes: 4
Views: 2557
Reputation: 41
Yes, the memory layout is similar for glm and assimp: data.html
But, according to the doc page classai_matrix4x4t, the assimp matrix is always row-major whereas the glm matrix is always column-major, meaning you need to transpose on conversion:
inline static Mat4 Assimp2Glm(const aiMatrix4x4& from)
{
    return Mat4(
        (double)from.a1, (double)from.b1, (double)from.c1, (double)from.d1,
        (double)from.a2, (double)from.b2, (double)from.c2, (double)from.d2,
        (double)from.a3, (double)from.b3, (double)from.c3, (double)from.d3,
        (double)from.a4, (double)from.b4, (double)from.c4, (double)from.d4
    );
}
inline static aiMatrix4x4 Glm2Assimp(const Mat4& from)
{
    return aiMatrix4x4(from[0][0], from[1][0], from[2][0], from[3][0],
                       from[0][1], from[1][1], from[2][1], from[3][1],
                       from[0][2], from[1][2], from[2][2], from[3][2],
                       from[0][3], from[1][3], from[2][3], from[3][3]);
}
PS: In assimp's element names, the letter (a-d) is the row and the digit (1-4) is the column.
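For illustration, a minimal usage sketch of the two converters; the Mat4 alias is an assumption (any glm 4x4 type with a matching scalar would do):

#include <assimp/matrix4x4.h>   // aiMatrix4x4
#include <glm/glm.hpp>

using Mat4 = glm::dmat4; // assumption: Mat4 is glm's double-precision 4x4 matrix

// Assimp2Glm / Glm2Assimp as defined above.

void convertNode(const aiMatrix4x4& nodeTransform)
{
    Mat4 m = Assimp2Glm(nodeTransform);  // transposes the memory layout
    aiMatrix4x4 back = Glm2Assimp(m);    // round-trips to the original values
    (void)back;
}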
Upvotes: 1
Reputation: 45322
You are confusing two different things:
It is often claimed that when working with row-major vs. column-major layouts, things have to be transposed and the matrix multiplication order has to be reversed. But this is not true.
What is true is that, mathematically, transpose(A*B) = transpose(B) * transpose(A). However, that is irrelevant here, because the matrix storage order is independent of, and orthogonal to, the mathematical interpretation of the matrices.
What I mean by this is: in math, it is exactly defined what a row and a column of a matrix are, and each element can be uniquely addressed by these two "coordinates". All the matrix operations are defined based on this convention. For example, in C = A*B, the element in the first row and the first column of C is calculated as the dot product of the first row of A (transposed to a column vector) and the first column of B.
Now, the matrix storage order just defines how the matrix data is laid out in memory. As a generalization, we could define a function f(row, col) mapping each (row, col) pair to some memory address. We could then write our matrix functions in terms of f, and we could change f to adopt row-major, column-major, or something else entirely (like a Z-order curve, if we want some fun). It doesn't matter which f we actually use (as long as the mapping is bijective): the operation C = A*B will always have the same result. What changes is just the data in memory, but we also have to use f to interpret that data. We could write a simple print function, also using f, to print the matrix as the 2D array of rows and columns a typical human would expect.
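As a minimal sketch of this idea (all names here are mine, not from any library): a 4x4 multiply written purely in terms of f gives the same mathematical result whichever layout f encodes.

#include <array>
#include <cstddef>

using Mat = std::array<float, 16>;
using IndexFn = std::size_t (*)(std::size_t row, std::size_t col);

// Two possible choices of f: same matrix, different memory layout.
std::size_t rowMajor(std::size_t r, std::size_t c) { return r * 4 + c; }
std::size_t colMajor(std::size_t r, std::size_t c) { return c * 4 + r; }

// C = A * B, with all three matrices stored according to the same f.
// The resulting mathematical matrix is identical for any bijective f.
Mat multiply(const Mat& A, const Mat& B, IndexFn f)
{
    Mat C{};
    for (std::size_t r = 0; r < 4; ++r)
        for (std::size_t c = 0; c < 4; ++c)
            for (std::size_t k = 0; k < 4; ++k)
                C[f(r, c)] += A[f(r, k)] * B[f(k, c)];
    return C;
}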
The confusion arises when you use a matrix stored in a different layout than the one the implementation of the matrix functions was designed for. If you have a matrix library which internally assumes column-major layout, and you pass in data in row-major format, it is as if you had transposed that matrix beforehand, and only at this point do things get screwed up.
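Continuing the sketch above (reusing Mat, rowMajor and colMajor): filling a matrix through one mapping and reading it through the other hands you exactly the transpose.

void implicitTransposeDemo()
{
    // Fill M through the row-major mapping...
    Mat M{};
    for (std::size_t r = 0; r < 4; ++r)
        for (std::size_t c = 0; c < 4; ++c)
            M[rowMajor(r, c)] = static_cast<float>(r * 4 + c);

    // ...then read it back through the column-major mapping: asking for
    // element (1, 2) actually returns element (2, 1), the implicit transpose.
    float x = M[colMajor(1, 2)]; // == M[rowMajor(2, 1)] == 9.0f
    (void)x;
}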
To confuse things even more, there is another issue related to this: the matrix * vector vs. vector * matrix issue. Some people like to write x' = x * M (with x' and x being row vectors), while others like to write y' = N * y (with column vectors). Mathematically, M * x = transpose(transpose(x) * transpose(M)), so people often confuse this with row- vs. column-major order effects as well, but it is also totally independent of that. It is just a matter of convention whether you use the one or the other.
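A quick way to check that equivalence, sketched with glm (which came up earlier in the thread); the helper name is made up:

#include <glm/glm.hpp>

// x * M (x as a row vector) produces the same numbers as transpose(M) * x
// (x as a column vector), for any M and x.
bool conventionsAgree(const glm::mat4& M, const glm::vec4& x)
{
    glm::vec4 rowForm = x * M;
    glm::vec4 colForm = glm::transpose(M) * x;
    return rowForm == colForm;
}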
So, to finally answer your question:
The transformation matrices created there are written for the matrix * vector multiplication convention, so that Mparent * Mchild is the correct matrix multiplication order.
Up to this point, the actual data layout in memory does not matter at all. It only begins to matter because now we are interfacing with a different API that has its own conventions. GL's default order is column-major, while the matrix class in use is written for row-major memory layout. So you just transpose at this point, so that GL's interpretation of that matrix matches your other library's.
The alternative would be not to convert them, and instead to account for the implicit transposition elsewhere in the system: either by changing the multiplication order in the shader, or by adjusting the operations which created the matrices in the first place. However, I would not recommend going down that path, because the resulting code would be totally unintuitive: in the end, it would mean working with column-major matrices in a matrix class that uses a row-major interpretation.
Upvotes: 4