I implemented a simple OBJ parser and am using a parallelepiped as the example model. I added a rotation feature based on quaternions. The next goal is adding light. I parsed the normals and decided to draw them as a "debug" feature (to better understand lighting later). But I got stuck after that:
Here is my parallelepiped with a small rotation. Look at the far bottom-right vertex and its normal. I can't understand why it is rendered through my parallelepiped. It should be hidden.
I use a depth buffer (because without it the parallelepiped looks weird while I rotate it). So I initialize it:
glGenRenderbuffers(1, &_depthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _depthRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, _frameBufferWidth, _frameBufferHeight);
and enable depth testing:
glEnable(GL_DEPTH_TEST);
I generate 4 VBOs: vertex and index buffers for the parallelepiped, and vertex and index buffers for the lines (normals). I use one simple shader for both models (I can add its code later if needed, but I think everything is fine with it). First I draw the parallelepiped, then the normals. Here is my code:
// _field is the parallelepiped model
glClearColor(0.3, 0.3, 0.4, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
int vertexSize = Vertex::size();
int colorSize = Color::size();
int normalSize = Normal::size();
int totalSize = vertexSize + colorSize + normalSize;
GLvoid *offset = (GLvoid *)(sizeof(Vertex)); // byte offset of the color attribute in the interleaved layout
glBindBuffer(GL_ARRAY_BUFFER, _geomBufferID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indicesBufferID);
glVertexAttribPointer(_shaderAtributePosition, vertexSize, GL_FLOAT, GL_FALSE, sizeof(Vertex::oneElement()) * totalSize, 0);
glVertexAttribPointer(_shaderAttributeColor, colorSize, GL_FLOAT, GL_FALSE, sizeof(Color::oneElement()) * totalSize, offset);
glDrawElements(GL_TRIANGLES, _field->getIndicesCount(), GL_UNSIGNED_SHORT, 0);
#ifdef NORMALS_DEBUG_DRAWING
glBindBuffer(GL_ARRAY_BUFFER, _normalGeomBufferID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _normalIndexBufferID);
totalSize = vertexSize + colorSize; // the normal lines are interleaved as position + color only
glVertexAttribPointer(_shaderAtributePosition, vertexSize, GL_FLOAT, GL_FALSE, sizeof(Vertex::oneElement()) * totalSize, 0);
glVertexAttribPointer(_shaderAttributeColor, colorSize, GL_FLOAT, GL_FALSE, sizeof(Color::oneElement()) * totalSize, offset);
glDrawElements(GL_LINES, 2 * _field->getVertexCount(), GL_UNSIGNED_SHORT, 0);
#endif
I understand that if I merge these two draw calls into one (and use the same VBOs for the parallelepiped and the normals), everything will be fine. But that would be inconvenient because I use both lines and triangles.
There should be another way to fix the Z order. I can't believe that a complex scene (for example sky, land, and buildings) is drawn via a single draw call.
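As far as I understand, with a working depth buffer the order of draw calls should not matter, something like this (just a sketch; drawTerrain/drawBuildings/drawSky are placeholder functions):
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawTerrain();    // first draw call
drawBuildings();  // second draw call; every fragment is depth-tested per pixel
drawSky();        // even drawn last, sky fragments behind geometry fail the test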
So, what am I missing?
Thanks in advance.
If you are rendering into a window surface, you need to request a depth buffer as part of your EGL configuration. The depth renderbuffer you have allocated is only useful if you attach it to a Framebuffer Object (FBO) for off-screen rendering.
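For the window-surface case, the depth buffer is requested when you choose the EGL config, roughly like this (a sketch assuming a GLES2 context; `display` and the exact bit depths are placeholders for your own setup):
const EGLint configAttribs[] = {
    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
    EGL_SURFACE_TYPE,    EGL_WINDOW_BIT,
    EGL_RED_SIZE,        8,
    EGL_GREEN_SIZE,      8,
    EGL_BLUE_SIZE,       8,
    EGL_DEPTH_SIZE,      16,  // without this the window surface has no depth buffer
    EGL_NONE
};
EGLConfig config;
EGLint numConfigs;
eglChooseConfig(display, configAttribs, &config, 1, &numConfigs);
If you instead want to keep your renderbuffer, it only takes effect once it is attached to a complete FBO, something like this (sketch; a color attachment is assumed to be set up elsewhere):
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// ... attach a color renderbuffer or texture here ...
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, _depthRenderbuffer);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the incomplete framebuffer
}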