Difference between Doom and Quake 3D rendering


I have been studying (old) 3D rendering techniques for the past few weeks and think I now have a fair understanding of how 3D rendering in Doom works. It uses raycasting to render the 3D scene, uses sprites for objects, and thus is not "true" 3D. It also does not allow real look up/down, only Y-shearing.

Quake is id Software's first "true" 3D engine: it has objects that can be viewed from different angles and allows looking up and down.

Now, I hear "true" 3D a lot when studying these techniques, but I can't find a clear explanation of what exactly it means. How is Quake's rendering different from Doom's?

Does the Quake world use 3D vertices, and are they all projected instead of raycasting for intersections?

I would love to hear a clear explanation of the differences!

P.S. I know the source code for Quake is available, but id Software's FTP has been down for weeks and I can't find it anywhere else. If anyone knows where to find it elsewhere, let me know.


There are 3 answers

Alexey Frunze (accepted answer)

I don't think there's a single universal definition. Perhaps the truest 3D possible would be one indistinguishable from what you see in real life: photorealistic rendering of objects, lighting and shadows, fully accounting for changes as they happen in real time, creating a sense of depth, and responding to the viewer's position, their use of the left, right or both eyes, and their changes in focus. We don't currently have such technology in our computers and TVs; real-life experience still beats everything available to the general consumer.

Compared to Doom, Quake had advanced lighting (via light maps), shading, better texture mapping (via MIP maps, which reduce the aliasing artifacts visible on distant surfaces) and 3D objects instead of sprites. Its game space wasn't inherently flat like Doom's, and its renderer let you look at things from different angles, as you say.
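To make the "true 3D" part concrete, here is a minimal sketch of projecting a camera-space 3D vertex onto the screen with a perspective divide, which is the core step a polygon engine performs for every vertex instead of casting a ray per screen column. The screen size and FOV scale below are made-up illustrative values, not Quake's actual ones.

```c
#include <stdio.h>

/* Illustrative constants, not Quake's real values. */
#define SCREEN_W  320
#define SCREEN_H  200
#define FOV_SCALE 160.0f

typedef struct { float x, y, z; } vec3;

/* Project a camera-space vertex onto the screen with a simple
   perspective divide: on-screen offset shrinks with distance (z). */
static int project(vec3 v, float *sx, float *sy)
{
    if (v.z <= 0.1f)            /* behind or too close to the camera */
        return 0;
    *sx = SCREEN_W / 2.0f + FOV_SCALE * v.x / v.z;
    *sy = SCREEN_H / 2.0f - FOV_SCALE * v.y / v.z;
    return 1;
}

int main(void)
{
    vec3 corner = { 1.0f, 0.5f, 4.0f };   /* an arbitrary vertex in front of the camera */
    float sx, sy;

    if (project(corner, &sx, &sy))
        printf("projected to (%.1f, %.1f)\n", sx, sy);
    return 0;
}
```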


If you're interested in how all these cool Quake things can be done, read Michael Abrash's books, e.g. Ramblings in Realtime and Graphics Programming Black Book.

Simon Howard

Just to point out, Doom's rendering engine is not a raycaster (as Wolfenstein 3D is) - that is, it doesn't work by casting a ray for each column of the screen. Rather, it is a BSP engine. The geometry of the level is divided into a binary tree, and that tree is walked down to render the scene.

At each node of the tree there are two subtrees, and these are walked in an order that depends on which side of the node's dividing line the player is on. This ensures the walls are rendered in order, so that far-away walls are occluded by closer ones.
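Here is a rough sketch of that traversal. The structures and field names are illustrative only, not Doom's actual node format: the viewer's side of the dividing line is recursed into first, then the wall stored on the divider is drawn, then the far side.

```c
#include <stdio.h>

/* A minimal BSP node sketch. Each node splits the (flat) map with a
   line a*x + b*y + c = 0 and stores a wall segment lying on that line. */
typedef struct node {
    float a, b, c;              /* dividing line */
    struct node *front, *back;  /* subtrees on each side of the line */
    const char *wall;           /* wall segment at this node */
} node;

static void draw_wall(const char *wall)
{
    printf("draw %s\n", wall);
}

/* Walk the tree front-to-back relative to the viewer, so nearer
   geometry is visited (and can occlude) before farther geometry. */
static void render_bsp(const node *n, float px, float py)
{
    if (!n)
        return;

    float side = n->a * px + n->b * py + n->c;
    const node *near_side = (side >= 0) ? n->front : n->back;
    const node *far_side  = (side >= 0) ? n->back  : n->front;

    render_bsp(near_side, px, py);   /* the player's side first */
    draw_wall(n->wall);              /* then the wall on the divider */
    render_bsp(far_side, px, py);    /* finally the far side */
}

int main(void)
{
    node left  = { 0, 0, 0, NULL, NULL, "left wall" };
    node right = { 0, 0, 0, NULL, NULL, "right wall" };
    node root  = { 1, 0, 0, &left, &right, "center wall" };  /* divider: x = 0 */

    render_bsp(&root, 2.0f, 0.0f);   /* player stands at x = 2 */
    return 0;
}
```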

True vertical look up and down isn't possible for the reasons you describe. Games such as Heretic and Hexen, which added the ability, used Y-shearing, which is essentially a hack: it's like extending the vertical height of the rendered view and then providing a window into it that slides up and down.
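In code terms, Y-shearing boils down to something like the sketch below (names and values are illustrative, not taken from the Heretic or Hexen source): the "horizon" row used as the projection centre simply slides up and down the screen, while wall columns are still drawn perfectly vertical, so the image shears rather than tilts.

```c
/* A minimal Y-shearing sketch with illustrative values. */
#define SCREEN_H 200

static int horizon = SCREEN_H / 2;   /* screen row that maps to eye level */

void look_up(int amount)
{
    horizon += amount;               /* horizon slides down: more ceiling/sky shows */
    if (horizon > SCREEN_H - 1)
        horizon = SCREEN_H - 1;
}

void look_down(int amount)
{
    horizon -= amount;               /* horizon slides up: more floor shows */
    if (horizon < 0)
        horizon = 0;
}

/* When a wall column is projected, its top and bottom rows are offset
   from the movable horizon rather than from a fixed screen centre. */
int project_row(float height_above_eye, float scale)
{
    return horizon - (int)(height_above_eye * scale);
}
```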

You can find more information on the Doom wiki and on Wikipedia - both are derived from a write-up I wrote on everything2 years ago.

Hope this helps!

subcog

The Doom engine has major limitations. Each map is essentially 2D, and the 3D appearance is largely an illusion. Each point on the map has only one floor and one ceiling, so there can never be overpasses. There are no ramps, only stairs and walls. All walls are drawn exactly vertical, and there is no real looking up and down.
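A rough sketch of the map data this implies (field names are illustrative, not Doom's actual structures): each sector stores exactly one floor height and one ceiling height, and walls are 2D line segments whose vertices have no z coordinate, which is why rooms can never stack on top of each other.

```c
/* Illustrative sector: every point inside it shares one floor and one ceiling. */
typedef struct {
    float floor_height;      /* the single floor at every point in the sector */
    float ceiling_height;    /* the single ceiling above it */
    int   floor_texture;
    int   ceiling_texture;
    int   light_level;
} sector;

/* Walls are 2D line segments between sectors; their on-screen height is
   derived from the adjoining sectors' heights, not from 3D polygons. */
typedef struct {
    float x1, y1;            /* start vertex (no z: the map itself is flat) */
    float x2, y2;            /* end vertex */
    sector *front_sector;
    sector *back_sector;     /* NULL for a solid one-sided wall */
} linedef;
```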