This question is closely related to my other question: Isometric rendering without tiles, is that goal reachable?
I want to depth sort objects in an isometric world (html5 canvas). The world is not tiled, so every item in the world can be placed at any x, y, z coordinate. Since it's not a tiled world, depth sorting is hard to do. I also want that, when items intersect, the visible parts are drawn just as they would be for intersecting parts in a fully 3D world. As people answered in my other question, this can be done by representing each 2D image as a 3D model. I want to go on with the solution given in the following comment on that question:
You don't have to work in 3D when you use webGL. WebGL draws polygons and is very quick at drawing 2D images as 4 verts making a small fan of triangles. You can still use the zbuffer and set corners (verts) to the z distance. Most of the 2D game libraries use webGL to render 2D and fallback to canvas if webGL is not there. There is also a webGL implementation of the canvas API on github that you could modify to meet your needs. (comment link)
So, you could see the 'logic' as 3D models: the z-buffer of WebGL provides correct rendering, while the rendered pixels themselves are pixels of the 2D images. But I don't know how to do this. Could someone explain further how to get this done? I read a lot of information, but it's all about real 3D.
You could use depth sprites as you pointed out in your other question (ps, you really should put those images in this question).
To use depth sprites you need to enable the EXT_frag_depth extension if it exists. Then you can write to gl_FragDepthEXT in your fragment shader. Making depth sprites sounds like more work to me than making 3D models. In that case you just load 2 textures per sprite, one for color and one for depth, and then do something like the fragment shader sketched below.
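A minimal sketch, assuming uniform and varying names (u_colorTexture, u_depthTexture, u_depthOffset, u_depthScale, v_texcoord) that are mine rather than from the original:

```glsl
#extension GL_EXT_frag_depth : require
precision mediump float;

// one texture with the sprite's colors, one with per-pixel depth values
uniform sampler2D u_colorTexture;
uniform sampler2D u_depthTexture;
uniform float u_depthOffset;
uniform float u_depthScale;

varying vec2 v_texcoord;

void main() {
  float depth = texture2D(u_depthTexture, v_texcoord).r;
  if (depth == 0.0) {
    discard;                 // depth 0 = transparent pixel
  }
  gl_FragColor = texture2D(u_colorTexture, v_texcoord);
  // write a per-pixel depth so the z-buffer can sort intersecting sprites
  gl_FragDepthEXT = u_depthOffset - depth * u_depthScale;
}
```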
You'd set depthOffset and depthScale to something like the values sketched below; that assumes each value in the depth texture is 1 unit less per depth change.
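For instance, something along these lines (the numbers and names are illustrative assumptions, with spriteY being the object's y position in the iso-plane):

```js
// hypothetical mapping: the sprite's y position picks the base depth, and each
// step in the depth texture moves the pixel slightly closer to the viewer
const depthOffset = 1 - (spriteY / 65536);
const depthScale = 1 / 256;
gl.uniform1f(u_depthOffsetLoc, depthOffset);
gl.uniform1f(u_depthScaleLoc, depthScale);
```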
As for how to draw in 2D in WebGL see this article.
Here's an example that seems to work. I generated the image because I'm too lazy to draw it in Photoshop; manually drawing depth values is pretty tedious. It assumes the furthest pixel in the image has a depth value of 1, the next closest pixels have a depth value of 2, etc. In other words, if you had a small 3x3 isometric cube, the pixels along its back edge would have a depth value of 1, the pixels one step closer would have 2, and so on toward the front.
The top left is what the image looks like. The top middle is 2 images drawn side by side. The top right is 2 images drawn one further down in y (x, y is the iso-plane). The bottom left is two images, one drawn below the other (below the plane). The bottom middle is the same thing just separated more. The bottom right is the same thing except drawn in the opposite order (just to check it works).
To save memory you could put the depth value in the alpha channel of the color texture; if it's 0, discard.
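A sketch of that variant, with the same assumed names, where the alpha channel doubles as the depth value:

```glsl
#extension GL_EXT_frag_depth : require
precision mediump float;

uniform sampler2D u_colorTexture;   // depth packed into the alpha channel
uniform float u_depthOffset;
uniform float u_depthScale;

varying vec2 v_texcoord;

void main() {
  vec4 color = texture2D(u_colorTexture, v_texcoord);
  if (color.a == 0.0) {
    discard;                        // alpha 0 = no pixel here
  }
  gl_FragColor = vec4(color.rgb, 1.0);
  gl_FragDepthEXT = u_depthOffset - color.a * u_depthScale;
}
```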
Unfortunately, according to webglstats.com, only 75% of desktops and 0% of phones support EXT_frag_depth. WebGL2, however, requires support for gl_FragDepth, and AFAIK most phones support OpenGL ES 3.0, on which WebGL2 is based, so in another couple of months most Android phones and most PCs will be getting WebGL2. iOS, on the other hand: as usual, Apple is secretive about when they will ship WebGL2 on iOS, and it's pretty clear they never plan to, given that there hasn't been a single commit to WebKit for WebGL2 in over 2 years.
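A quick feature check could look like this (a sketch; which fallback you pick is up to you):

```js
// prefer WebGL2 (which always supports gl_FragDepth), otherwise look for the
// WebGL1 extension; if neither exists, fall back to another approach
const canvas = document.querySelector('canvas');
let gl = canvas.getContext('webgl2');
let hasFragDepth = !!gl;
if (!gl) {
  gl = canvas.getContext('webgl');
  hasFragDepth = !!(gl && gl.getExtension('EXT_frag_depth'));
}
```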
For systems that don't support WebGL2, or that don't support EXT_frag_depth on WebGL1, you could simulate EXT_frag_depth using vertex shaders: pass the depth texture to a vertex shader and draw with gl.POINTS, one point per pixel. That way you can choose the depth of each point, as in the sketch below. It would work, but it might end up being pretty slow, possibly slower than just doing it in JavaScript by writing directly to an array and using CanvasRenderingContext2D.putImageData.
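A rough sketch of such a vertex shader (one gl.POINTS vertex per pixel; the attribute and uniform names are made up, and it relies on vertex texture fetch, i.e. MAX_VERTEX_TEXTURE_IMAGE_UNITS > 0):

```glsl
attribute vec2 a_pixelCoord;        // this point's pixel position within the sprite

uniform sampler2D u_depthTexture;   // per-pixel depth values for the sprite
uniform vec2 u_spriteSize;          // sprite size in pixels
uniform vec2 u_spritePos;           // where the sprite goes on the canvas, in pixels
uniform vec2 u_resolution;          // canvas size in pixels
uniform float u_depthOffset;
uniform float u_depthScale;

varying vec2 v_texcoord;

void main() {
  v_texcoord = (a_pixelCoord + 0.5) / u_spriteSize;
  float depth = texture2D(u_depthTexture, v_texcoord).r;

  // pixel position -> clip space (flip y so +y goes down like 2D canvas)
  vec2 clip = ((u_spritePos + a_pixelCoord) / u_resolution) * 2.0 - 1.0;
  gl_Position = vec4(clip.x, -clip.y, u_depthOffset - depth * u_depthScale, 1.0);
  gl_PointSize = 1.0;
}
```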
Here's an example
Note that even if that turns out to be too slow, I don't actually think doing it in software in JavaScript is guaranteed to be too slow. You could use asm.js to make a renderer: you set up and manipulate the data for what goes where in JavaScript, then call your asm.js routine to do the software rendering.
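A sketch of the software route in plain JavaScript (the sprite layout and helper names here are assumptions, not from the linked demos; clipping to the canvas edges is omitted for brevity):

```js
// frame: ImageData covering the canvas; depthBuf: Float32Array of width*height zeros
// sprite: { width, height, color: Uint8ClampedArray (RGBA), depth: Uint8Array }
function drawSpriteSoftware(frame, depthBuf, sprite, dstX, dstY, baseDepth) {
  const fw = frame.width;
  for (let y = 0; y < sprite.height; ++y) {
    for (let x = 0; x < sprite.width; ++x) {
      const si = y * sprite.width + x;
      const d = sprite.depth[si];
      if (d === 0) continue;                     // 0 = transparent pixel
      const di = (dstY + y) * fw + (dstX + x);
      const depth = baseDepth + d;               // bigger = closer to the viewer
      if (depth <= depthBuf[di]) continue;       // something closer is already there
      depthBuf[di] = depth;
      const s = si * 4;
      const f = di * 4;
      frame.data[f    ] = sprite.color[s    ];
      frame.data[f + 1] = sprite.color[s + 1];
      frame.data[f + 2] = sprite.color[s + 2];
      frame.data[f + 3] = 255;
    }
  }
}
// after all sprites are drawn: ctx.putImageData(frame, 0, 0);
```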
As an example, this demo is entirely software rendered in asm.js, as is this one.
If that ends up being too slow, one other way would need some kind of 3D data for your 2D images. You could just use cubes if the 2D images are always cubic, but I can already see from your sample picture that those 2 cabinets require a 3D model, because the top is a few pixels wider than the body and there's a support beam on the back.
In any case, assuming you make 3D models for your objects you'd use the stencil buffer + the depth buffer.
For each object:

- turn on the STENCIL_TEST and DEPTH_TEST
- set the stencil func to ALWAYS, the reference to the iteration count, and the mask to 255
- set the stencil operation to REPLACE if the depth test passes and KEEP otherwise
- now draw your cube (or whatever 3D model represents your 2D image)

At this point the stencil buffer will have a 2D mask with ref everywhere the cube was drawn. So now draw your 2D image, using the stencil to draw only where the cube was successfully drawn.

Drawing the image:

- turn off the DEPTH_TEST
- set the stencil function so we only draw where the stencil equals ref
- set the stencil operation to KEEP for all cases
- draw the 2D image
This will end up only drawing where the cube drew.
Repeat for each object.
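Putting those steps together, a sketch of the two passes in WebGL might look like this (drawCube and drawImage are placeholder helpers; masking out the cube's colors is my assumption, not part of the steps above):

```js
// note: the context must be created with { stencil: true } to get a stencil buffer
gl.enable(gl.STENCIL_TEST);

for (let i = 0; i < objects.length; ++i) {
  const ref = (i % 255) + 1;                    // keep ref in 1..255 (8-bit stencil)

  // pass 1: draw the 3D stand-in model, writing `ref` into the stencil buffer
  // wherever the depth test passes
  gl.enable(gl.DEPTH_TEST);
  gl.stencilFunc(gl.ALWAYS, ref, 255);
  gl.stencilOp(gl.KEEP, gl.KEEP, gl.REPLACE);   // stencil-fail, depth-fail, both pass
  gl.colorMask(false, false, false, false);     // assumption: keep the cube invisible
  drawCube(objects[i]);                         // placeholder helper

  // pass 2: draw the 2D image only where the stencil equals `ref`
  gl.colorMask(true, true, true, true);
  gl.disable(gl.DEPTH_TEST);
  gl.stencilFunc(gl.EQUAL, ref, 255);
  gl.stencilOp(gl.KEEP, gl.KEEP, gl.KEEP);
  drawImage(objects[i]);                        // placeholder helper
}
```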
You might want to clear the stencil buffer after every object, or after every 254 objects, and make sure ref is always between 1 and 255, because the stencil buffer is only 8 bits. That means when you draw object #256 it will be using the same ref value as object #1, so if any of those values are left in the stencil buffer there's a chance you might accidentally draw there.