Mathematical view of z-fighting


To understand how z-fighting works, I got this question:

Consider the point p = (0, 0, −500) given in the local coordinate system of the camera. Derive the depth values assigned to p in the different stages of the pipeline described above, i.e., derive the depth value of p in eye space, in normalized device coordinates ([−1, 1]), in the range [0, 1], and the final depth buffer value, with n = 200, f = 1000 and m = 24.

I think the second step of this procedure is z1 = z · (−(f+n)/(f−n)) − 2fn/(f−n), from the perspective transformation matrix. After that, z2 = (1/2)·z1 + 1/2.

But I don't know what the transformation to eye space should look like, or what the last step is.

I hope someone can help me :)


1 Answer

Answered by derhass:

To get from object space ("local coordinate system") to eye space, you have to take the model (object-to-world) and view (world-to-eye) transformations into account. Usually, these are affine transforms described by matrices. The model and view transforms may be composed into a single modelView matrix, since world space is not explicitly needed; that is how the old GL fixed-function pipeline worked.
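A minimal sketch of that composition, in plain Python. The matrices here are hypothetical: since your point is already given in the camera's local coordinate system, I assume both model and view are the identity, so z_eye comes out unchanged.

```python
def mat_mul(a, b):
    """Multiply two 4x4 matrices stored as row-major nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(m, v):
    """Apply a 4x4 matrix to a homogeneous 4-vector."""
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

model = identity                    # object-to-world (assumed identity here)
view = identity                     # world-to-eye   (assumed identity here)
model_view = mat_mul(view, model)   # note the order: view * model

p_local = [0.0, 0.0, -500.0, 1.0]   # the point from the question, homogeneous
p_eye = transform(model_view, p_local)
print(p_eye)  # [0.0, 0.0, -500.0, 1.0] -> z_eye = -500
```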

Since it is not clear what exactly is given, I'll just assume we know the matrices, or that you can determine/compute them from whatever is given. Since you only need z_eye, you can just take the dot product of the third row of these matrices with your input vector p (as you already did for the projection).
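The row trick can be sketched like this; the third row used below belongs to an identity modelView matrix, which is an assumption for this exercise, not something given in the question.

```python
def dot4(row, v):
    """Dot product of one matrix row with a homogeneous 4-vector."""
    return sum(r * c for r, c in zip(row, v))

# Third row of an assumed-identity modelView matrix: picks out z unchanged.
model_view_row_z = [0.0, 0.0, 1.0, 0.0]

p = [0.0, 0.0, -500.0, 1.0]
z_eye = dot4(model_view_row_z, p)
print(z_eye)  # -500.0
```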

After the projection you are in clip space and need to do the homogeneous divide by w_clip. That means calculating only z is not enough: you also need the w coordinate, as defined by applying the projection matrix. In the typical case, w_clip = -z_eye, but in the general case you might get something else. That means you might also need x_eye, y_eye and w_eye, since the model and view transforms might not be affine (very unlikely), might play around with w (a crude way of scaling), or the projection direction might not be identical to the z axis (still not very likely, but in theory perfectly possible).
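For the typical case named above (w_clip = -z_eye), the divide can be sketched with the z and w rows of the standard OpenGL perspective matrix and the question's near/far planes:

```python
n, f = 200.0, 1000.0   # near and far planes from the question
z_eye = -500.0
w_eye = 1.0            # p is a point, so its homogeneous w is 1

# z row of the standard perspective matrix: (0, 0, -(f+n)/(f-n), -2fn/(f-n))
z_clip = -(f + n) / (f - n) * z_eye - 2.0 * f * n / (f - n) * w_eye
# w row of the standard perspective matrix: (0, 0, -1, 0), so w_clip = -z_eye
w_clip = -z_eye

z_ndc = z_clip / w_clip    # the homogeneous divide
print(z_clip, w_clip, z_ndc)  # 250.0 500.0 0.5
```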

After z_ndc=z_clip/w_clip, you need the viewport transform. OpenGL will transform the range [-1,1] to [0,1] here by default, and your question assumes the same. Finally, the value is converted to the final format. Since by default, a integer depth buffer is used, the range [0,1] is just linearily mapped to [0,max], and the fractional part ignored. Your queuestion seems to suggest that a 24-bit depth buffer is used, so max=2^24-1.