We are developing a renderer using OpenGL. One of its features is the ability to import FBX models; the importer module uses the FBX SDK (2017). Now, as we all know, OpenGL uses a right-handed coordinate system: forward runs from positive to negative Z, right is right, and the up vector is up. In our application, the requirement is to have a positive forward vector, similar to DirectX. At the application level we set this up by scaling the Z of the projection matrix by -1, using glm math:
glm::mat4 persp = glm::perspectiveLH(glm::radians(fov), width / height, mNearPlane, mFarPlane);
Which is the same as doing this:
glm::mat4 persp = glm::perspective(glm::radians(fov), width / height, mNearPlane, mFarPlane);
persp = glm::scale(persp, glm::vec3(1.0f, 1.0f, -1.0f));
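(A quick way to convince yourself of the equivalence is to build both matrices and compare them; a minimal sketch, reusing the variables above:)
glm::mat4 a = glm::perspectiveLH(glm::radians(fov), width / height, mNearPlane, mFarPlane);
glm::mat4 b = glm::scale(glm::perspective(glm::radians(fov), width / height, mNearPlane, mFarPlane),
                         glm::vec3(1.0f, 1.0f, -1.0f));
// Compare column by column; needs <cassert> and <glm/gtc/epsilon.hpp>.
for (int c = 0; c < 4; ++c)
    assert(glm::all(glm::epsilonEqual(a[c], b[c], 1e-6f)));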
So far so good. The funny part comes when we import FBX models.
If using
FbxAxisSystem::OpenGL.ConvertScene(pSceneFbx);
and then transform the vertices with the node's global transform, which is calculated like this:
FbxSystemUnit fbxUnit = node->GetScene()->GetGlobalSettings().GetSystemUnit();
FbxMatrix globalTransform = node->EvaluateGlobalTransform();
glm::dvec4 c0 = glm::make_vec4((double*)globalTransform.GetColumn(0).Buffer());
glm::dvec4 c1 = glm::make_vec4((double*)globalTransform.GetColumn(1).Buffer());
glm::dvec4 c2 = glm::make_vec4((double*)globalTransform.GetColumn(2).Buffer());
glm::dvec4 c3 = glm::make_vec4((double*)globalTransform.GetColumn(3).Buffer());
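For completeness, here is a minimal sketch of how we then assemble the glm matrix and run the mesh's control points through it (the variable names are ours):
glm::dmat4 globalMat(c0, c1, c2, c3);
FbxMesh* mesh = node->GetMesh();
if (mesh != nullptr)
{
    FbxVector4* points = mesh->GetControlPoints();
    for (int i = 0; i < mesh->GetControlPointsCount(); ++i)
    {
        // Promote the control point to a homogeneous position and transform it.
        glm::dvec4 p(points[i][0], points[i][1], points[i][2], 1.0);
        glm::dvec4 transformed = globalMat * p;
        // ...store transformed.x/y/z into our vertex buffer.
    }
}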
The result: the geometry faces come out inverted (OpenGL's default winding order is CCW).
If using the DirectX converter:
FbxAxisSystem::DirectX.ConvertScene(pSceneFbx);
The model is both inverted and flipped upside-down.
(Screenshots omitted: the OpenGL-converted result and the DirectX-converted result.)
What we found that solves this problem is flipping the Z axis of that matrix (negating its third column), and also rotating it 180 degrees around the Z axis; otherwise the front of the model would be its back (yeah, sounds tricky, but it makes sense when you compare the OpenGL and DirectX coordinate system conventions).
So, the whole "conversion" matrix now looks like this:
FbxSystemUnit fbxUnit = node->GetScene()->GetGlobalSettings().GetSystemUnit();
FbxMatrix globalTransform = node->EvaluateGlobalTransform();

glm::dvec4 c0 = glm::make_vec4((double*)globalTransform.GetColumn(0).Buffer());
glm::dvec4 c1 = glm::make_vec4((double*)globalTransform.GetColumn(1).Buffer());
glm::dvec4 c2 = glm::make_vec4((double*)globalTransform.GetColumn(2).Buffer());
glm::dvec4 c3 = glm::make_vec4((double*)globalTransform.GetColumn(3).Buffer());

// Flip Z to get the correct mesh direction (CCW winding).
glm::mat4 mat = glm::mat4(1,  0,  0, 0,
                          0,  1,  0, 0,
                          0,  0, -1, 0,
                          0,  0,  0, 1);

// In this case the model's faces point the right way, but the model
// itself needs to be rotated, because the camera looks at it from the
// wrong direction.
mat = glm::rotate(mat, glm::radians(180.0f), glm::vec3(0.0f, 0.0f, 1.0f));

glm::mat4 convertMatr = glm::mat4(c0, c1, c2, c3) * mat;
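As a side note, composing the Z flip with the 180-degree Z rotation collapses into negating all three axes, which is a compact way to see why one matrix fixes both the winding and the facing direction. A quick check (a sketch):
// scale(1, 1, -1) followed by rotZ(180 deg) equals diag(-1, -1, -1, 1),
// i.e. a point reflection through the origin (up to float rounding).
glm::mat4 flipZ = glm::scale(glm::mat4(1.0f), glm::vec3(1.0f, 1.0f, -1.0f));
glm::mat4 combined = glm::rotate(flipZ, glm::radians(180.0f), glm::vec3(0.0f, 0.0f, 1.0f));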
Then, transforming the FBX model's vertices with that matrix gives the desired result.
Btw, how do we know the result is the desired one? We compare it against the Unity3D game engine.
Now, this is the first time I have been required to perform such a hack, and it feels like a very nasty one. It gets especially hairy with skinned meshes: we need to transform the bone matrices, the pose matrix and whatnot with the conversion matrix as well...
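(For what it's worth, one consistent way to propagate the conversion to those transforms is by conjugation; a minimal sketch, where convertTransform is our own helper, not FBX SDK API:)
// A transform M authored in the source space acts on converted-space
// points as C * M * inverse(C): convert back, apply M, convert again.
glm::mat4 convertTransform(const glm::mat4& conversion, const glm::mat4& M)
{
    return conversion * M * glm::inverse(conversion);
}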
The question is: in this case, when we need to keep the CCW winding order and have a positive forward axis in OpenGL, is this the only way to get the geometry transformations right?
(The 3D model source is 3ds Max, exported with Y-up.)
Most of the programming I've done in OpenGL has used a right-handed system, as opposed to DirectX's left-handed system. The thing to understand about OpenGL versus DirectX is that OpenGL does not have a built-in camera object. You have to create and supply your own camera, and by doing so you can set your coordinate system to be either right-handed or left-handed depending on how you set up your stored matrices; the conventional OpenGL setup is right-handed.

The other major difference between DirectX and OpenGL is that once you have your scene and your camera in 3D space, in DirectX it is the camera object that actually moves or rotates, whereas in OpenGL there is no camera to move: the viewpoint is fixed and the view matrix effectively moves the entire scene relative to it.

So the simplest way to handle the conversion is to understand how matrix multiplications compose and to know the order of your MVP (Model - View - Projection) matrices. The conversion from an RHC to an LHC should be pre-calculated once, after opening and loading the model file, and then baked into your Model matrix. Once you have the appropriate Model matrix, the rest of the MVP calculation should come out correct.
These would be the basic steps, with a sketch after the list:
1. Load the model file.
2. Pre-calculate the handedness conversion once and bake it into the Model matrix.
3. Build the View and Projection matrices as usual.
4. Multiply them in MVP order to get the final transform.
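In glm terms (a minimal sketch; the matrix and parameter names are illustrative):
// Baked once at load time; shown here applied on the left, so the
// conversion acts on the model's world-space result.
glm::mat4 conversion = glm::scale(glm::mat4(1.0f), glm::vec3(1.0f, 1.0f, -1.0f)); // e.g. a Z flip
glm::mat4 model = conversion * loadedNodeTransform;

// The rest of the MVP chain stays the usual one.
glm::mat4 view = glm::lookAt(eye, target, up);
glm::mat4 proj = glm::perspective(glm::radians(fov), aspect, nearPlane, farPlane);
glm::mat4 mvp  = proj * view * model;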
The easiest case when converting from an RHC to an LHC (and vice versa) is when X is left/right and Y is up/down in both systems: then all you need to do is negate every Z coordinate. Note that this handles translations only, not rotations. Now, if the other application uses an RHC system whose up vector is Z and whose in/out vector is Y, while you are using a Y-up LHC system, then you first need to swap every Y with Z, and then negate the new Z after the swap; again, this is good for translations only, not rotations. Both cases, as point conversions, are sketched below.
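(A minimal sketch; the helper names are mine:)
glm::vec3 rhToLh(const glm::vec3& p)
{
    return glm::vec3(p.x, p.y, -p.z);  // same axes, negate Z
}

glm::vec3 zUpRhToYUpLh(const glm::vec3& p)
{
    return glm::vec3(p.x, p.z, -p.y);  // swap Y and Z, then negate the new Z
}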
Here is the basic formula for the conversions: with the axis-flip matrix C = diag(1, 1, -1, 1), a full transform matrix M converts between the two systems as M' = C * M * C. Since C is its own inverse, the math is the same converting in either direction; a similar formulation can be found in the MathWorks/MATLAB documentation.
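A quick numerical check of that conjugation (a sketch):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 C = glm::scale(glm::mat4(1.0f), glm::vec3(1.0f, 1.0f, -1.0f)); // diag(1, 1, -1, 1)
glm::mat4 M = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, 2.0f, 3.0f));
glm::mat4 Mp = C * M * C;      // the translation part becomes (1, 2, -3)
// Converting back: C * Mp * C == M, since C * C is the identity.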