Is it possible to get the indices of 'invisible' vertices in Python OpenGL libraries like gloo or glumpy?


Is it possible to get the indices of 'invisible' vertices in Python OpenGL libraries like gloo or glumpy? For example, when I draw a 3D sphere in the scenegraph and rotate the object using a turntable camera, half of the vertices are invisible, and the set of invisible vertices changes whenever I rotate.

If I turn on the 'cull_face' option, OpenGL will not draw them, but is there any way to get the indices of those vertices that are 'undrawn' or invisible because they are blocked by other vertices?
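For example, a minimal sketch of the setup I mean, using vispy's scene API (the sphere parameters and the `sphere.mesh.mesh_data` attribute path below are illustrative and may differ between versions):

```python
# Minimal sketch of the setup described above, using vispy's scene API
# (sphere parameters and the mesh_data access are illustrative assumptions).
from vispy import app, scene

canvas = scene.SceneCanvas(keys='interactive', show=True)
view = canvas.central_widget.add_view()
view.camera = 'turntable'      # rotating changes which vertices face the camera

sphere = scene.visuals.Sphere(radius=1.0, parent=view.scene,
                              edge_color='black')

# The vertices/faces whose visibility I would like to query each frame.
md = sphere.mesh.mesh_data
print(md.get_vertices().shape, md.get_faces().shape)

if __name__ == '__main__':
    app.run()
```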


There is 1 answer

derhass (Best Answer)

but is there any way to get the indices of those vertices that are 'undrawn' or invisible because they are blocked by other vertices?

OpenGL does not offer such functionality in a direct way. It can still be achieved, of course, but you need to implement it yourself. Here are some ideas:

  • After drawing the scene, use OpenGL occlusion queries, rendering each vertex as a separately queried point. I would not recommend it performance-wise, but it can be done (see the first sketch after this list).
  • Since you are basically interested in face culling, just calculate the face culling yourself. For each triangle you need the normal vector, which is just the cross product of two edges, and then you check whether the angle between the view direction and the normal is above or below 90 degrees, so it is a simple dot product. This approach can easily be ported to GPU compute shaders and will trivially run in parallel, since each triangle can be tested independently. It should also be done on the GPU because you can use its power to transform the vertices with exactly the same matrices as before; that makes the viewing direction the constant (0, 0, 1) in window space, so the dot product reduces to the z component of the normal, which is just the signed area of the triangle's 2D projection in window space. A CPU version of this test is sketched second below.
  • You could also do simple ray casting by checking a ray from the camera to each vertex for intersections. If you apply the projection matrix first, every view ray becomes parallel to the z axis, so you can simply test each projected vertex against the depth buffer. This approach could also be implemented on the GPU (third sketch below).
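A rough sketch of the occlusion-query idea with PyOpenGL, assuming the mesh's vertex buffer and shader are still bound and the scene has just been rendered into the depth buffer; `n_vertices` and the depth-state handling are assumptions, and since this issues one query and one draw call per vertex it will be slow:

```python
import numpy as np
from OpenGL.GL import (
    glGenQueries, glBeginQuery, glEndQuery, glGetQueryObjectuiv,
    glDrawArrays, glDepthFunc, glDepthMask,
    GL_SAMPLES_PASSED, GL_QUERY_RESULT, GL_POINTS, GL_LEQUAL, GL_FALSE, GL_TRUE,
)

def visible_vertex_indices(n_vertices):
    """Query, one vertex at a time, whether drawing that vertex as a point
    passes the depth test against the scene that was just rendered."""
    glDepthFunc(GL_LEQUAL)    # a point lying exactly on the surface should pass
    glDepthMask(GL_FALSE)     # don't let the query points modify the depth buffer
    queries = glGenQueries(n_vertices)
    for i, q in enumerate(queries):
        glBeginQuery(GL_SAMPLES_PASSED, q)
        glDrawArrays(GL_POINTS, i, 1)      # draw vertex i only
        glEndQuery(GL_SAMPLES_PASSED)
    glDepthMask(GL_TRUE)
    return np.array([i for i, q in enumerate(queries)
                     if glGetQueryObjectuiv(q, GL_QUERY_RESULT) > 0])
```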
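A NumPy sketch of the second idea (CPU-side back-face test); `vertices`, `faces` and `mvp` are assumptions standing for your mesh data and the model-view-projection matrix actually used for drawing:

```python
import numpy as np

def front_facing_vertex_indices(vertices, faces, mvp):
    """Indices of vertices that belong to at least one front-facing triangle."""
    # Transform to clip space (column-vector convention; use `v @ mvp` instead
    # if your library stores row-vector matrices, as vispy/glumpy often do).
    v = np.hstack([vertices, np.ones((len(vertices), 1))]) @ mvp.T
    xy = v[:, :2] / v[:, 3:4]          # perspective divide -> NDC x, y

    # Signed area of each projected triangle = z component of the 2D cross
    # product of two edges. Positive = counter-clockwise = front-facing under
    # the default glFrontFace(GL_CCW) convention.
    a, b, c = xy[faces[:, 0]], xy[faces[:, 1]], xy[faces[:, 2]]
    signed_area = ((b[:, 0] - a[:, 0]) * (c[:, 1] - a[:, 1])
                   - (b[:, 1] - a[:, 1]) * (c[:, 0] - a[:, 0]))

    front = faces[signed_area > 0]     # keep front-facing triangles only
    return np.unique(front)            # vertices used by any of them
```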
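And a sketch of the depth-buffer test for the third idea, again with PyOpenGL and NumPy; it assumes the scene was just rendered with depth testing enabled and that `mvp`, `width` and `height` match that render:

```python
import numpy as np
from OpenGL.GL import glReadPixels, GL_DEPTH_COMPONENT, GL_FLOAT

def unoccluded_vertex_indices(vertices, mvp, width, height, eps=1e-4):
    """Indices of vertices whose projection is not hidden behind the depth
    buffer of the scene that was just rendered (clipping is ignored here)."""
    # Clip space -> NDC -> window coordinates (same caveat about the matrix
    # convention as in the previous sketch).
    v = np.hstack([vertices, np.ones((len(vertices), 1))]) @ mvp.T
    ndc = v[:, :3] / v[:, 3:4]
    px = ((ndc[:, 0] + 1) * 0.5 * width).astype(int).clip(0, width - 1)
    py = ((ndc[:, 1] + 1) * 0.5 * height).astype(int).clip(0, height - 1)
    z = (ndc[:, 2] + 1) * 0.5          # depth in [0, 1] for the default glDepthRange

    # Read the depth buffer back; rows come bottom-to-top, matching window y.
    raw = glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT)
    depth = np.frombuffer(raw, dtype=np.float32).reshape(height, width)

    # A vertex is visible if nothing nearer was drawn at its pixel.
    return np.where(z <= depth[py, px] + eps)[0]
```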