I have an application where I load a series of medical images (jpg), create a texture object for each image, and display those textures on 3D planes. The number of images depends on the CT scanner's resolution, but my prototype should work with up to 512.
As an additional task, I want to use those textures to perform volumetric rendering on a separate 3D canvas.
All the algorithms that tackle the lack of 3D textures in WebGL share a "silent" prerequisite: a texture atlas in an image format already exists.
In my case however I don't have such an atlas.
Assuming that the hardware supports the resulting size of this 2D texture atlas, how could I stitch the already loaded textures into one single 2D texture and feed it to the shader?
I thought about merging the data of each image into a single array and using it to create a THREE.DataTexture. But I couldn't find a way to read the image's data back from the texture that uses it. Is there another way to extract the data of a loaded image?
The easiest way is probably to draw your images into a 2D canvas to build your atlas.
Let's assume we have a function that downloads the 512 images and that we want to put them in a grid of 32 by 16.
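Here's a minimal sketch of that. The slice size and the `loadImage` helper are assumptions (use your real dimensions and loader):

```js
const COLS = 32;
const ROWS = 16;
const sliceWidth = 256;   // assumed slice size; use your real dimensions
const sliceHeight = 256;

// Small promise wrapper around Image loading (helper, not a three.js API).
function loadImage(url) {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => resolve(img);
    img.onerror = reject;
    img.src = url;
  });
}

async function buildAtlas(urls) {
  const canvas = document.createElement('canvas');
  canvas.width = COLS * sliceWidth;    // 8192 for 256px slices
  canvas.height = ROWS * sliceHeight;  // 4096
  const ctx = canvas.getContext('2d');

  const images = await Promise.all(urls.map(loadImage));
  images.forEach((img, i) => {
    // Row-major placement: left to right, then down to the next row.
    const x = (i % COLS) * sliceWidth;
    const y = Math.floor(i / COLS) * sliceHeight;
    ctx.drawImage(img, x, y, sliceWidth, sliceHeight);
  });

  const texture = new THREE.CanvasTexture(canvas);
  texture.minFilter = THREE.LinearFilter;  // skip mips; they'd bleed across slices
  return texture;
}
```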
Some optimizations: you might change the code so you create the canvas at the beginning and, as each image loads, draw it into the 2D canvas at the correct place. The advantage is that the browser won't need to keep all 512 images in memory; it can discard each one right after you've drawn it.
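Reusing the names from the sketch above, the incremental version might look like this:

```js
// Same grid as above, but each slice is drawn (and can be garbage
// collected) as soon as it arrives. Setting needsUpdate makes three.js
// re-upload the canvas to the GPU on the next frame.
const canvas = document.createElement('canvas');
canvas.width = COLS * sliceWidth;
canvas.height = ROWS * sliceHeight;
const ctx = canvas.getContext('2d');
const texture = new THREE.CanvasTexture(canvas);

urls.forEach((url, i) => {
  loadImage(url).then((img) => {
    const x = (i % COLS) * sliceWidth;
    const y = Math.floor(i / COLS) * sliceHeight;
    ctx.drawImage(img, x, y, sliceWidth, sliceHeight);
    texture.needsUpdate = true;
  });
});
```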
Alternatively, you can turn each image into a texture and use a texture attached to a framebuffer to render each image texture into the atlas texture. That's more work: you need to write a simple pair of 2D shaders and then render each image texture into the atlas texture at the correct place.
Off the top of my head, the only reason to do that is if the textures have 4 channels of data instead of 3 or fewer, since there's no way to preserve all 4 channels with a 2D canvas: the 2D canvas always uses premultiplied alpha.
Drawing a texture into a texture is the same as drawing, period. See any example that draws into a texture.
The short version in three.js is: first, make a render target.
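Something like this, assuming the atlas dimensions from the earlier sketch:

```js
const rtTexture = new THREE.WebGLRenderTarget(
  COLS * sliceWidth, ROWS * sliceHeight,
  {
    minFilter: THREE.LinearFilter,
    magFilter: THREE.LinearFilter,
    format: THREE.RGBAFormat,  // keeps all 4 channels, unlike the 2D canvas
  });
```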
Set up a plane and a material to render with, and put them in a scene. For each image texture, set the material to use that texture and set whatever other parameters place the quad where you want it drawn into the atlas texture. I'm guessing an orthographic camera would make that easiest. Then call render with the render target; see the sketch below.
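A sketch of that loop, using the recent `renderer.setRenderTarget` API and assuming `imageTextures` holds one `THREE.Texture` per slice:

```js
const atlasWidth = COLS * sliceWidth;
const atlasHeight = ROWS * sliceHeight;

// Orthographic camera in atlas pixel units, (0,0) at the bottom-left.
const scene = new THREE.Scene();
const camera = new THREE.OrthographicCamera(0, atlasWidth, atlasHeight, 0, -1, 1);
// Initialize the material with a map so the shader is compiled with
// texturing enabled; we swap the map per slice below.
const material = new THREE.MeshBasicMaterial({ map: imageTextures[0] });
const quad = new THREE.Mesh(
  new THREE.PlaneGeometry(sliceWidth, sliceHeight), material);
scene.add(quad);

renderer.setRenderTarget(rtTexture);
renderer.autoClear = false;  // keep previously rendered slices
renderer.clear();            // clear the atlas once up front

imageTextures.forEach((tex, i) => {
  material.map = tex;
  // Position the quad's center over its grid cell.
  quad.position.x = (i % COLS) * sliceWidth + sliceWidth / 2;
  quad.position.y = Math.floor(i / COLS) * sliceHeight + sliceHeight / 2;
  renderer.render(scene, camera);
});

renderer.autoClear = true;
renderer.setRenderTarget(null);  // back to rendering to the canvas
```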
That will render into `rtTexture`. When you're done, `rtTexture` is your atlas texture. Just use it like any texture: assign it to a material.
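For example, with a recent three.js the sampled texture lives on the render target's `.texture` property:

```js
// Use the rendered atlas like any other texture.
const atlasMaterial = new THREE.MeshBasicMaterial({ map: rtTexture.texture });
```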