Create a 2D texture from existing textures


I have an application where I load a series of medical images (JPG), create a texture object for each image, and display those textures on 3D planes. The number of images depends on the CT scanner's resolution, but my prototype should work with up to 512.

As an additional task, I want to use those textures to perform volumetric rendering on a separate 3D canvas.

All the algorithms that work around the lack of 3D textures in WebGL share a "silent" prerequisite: a texture atlas already exists in an image format.

In my case however I don't have such an atlas.

  1. Assuming that the hardware supports the resulting size of this 2D texture atlas, how can I combine the already-loaded textures into one single 2D texture and feed it to the shader?

  2. I thought about merging the data of each image into a single array and creating a THREE.DataTexture from it. But I couldn't find a way to read the image data back from the texture that uses it. Is there another way to extract the data of a loaded image?
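For point 1, the atlas dimensions are easy to compute up front and compare against the hardware limit (`gl.getParameter(gl.MAX_TEXTURE_SIZE)`). A minimal sketch, assuming 128x128 slices in a 32x16 grid (the grid numbers and `atlasDims` helper are assumptions, not part of any API):

```javascript
// Compute the atlas dimensions for a grid of equally sized slices.
function atlasDims(sliceWidth, sliceHeight, across, down) {
  return { width: sliceWidth * across, height: sliceHeight * down };
}

var dims = atlasDims(128, 128, 32, 16);  // 4096 x 2048 for 512 slices

// In the browser you would then check that it fits, e.g.:
// var max = gl.getParameter(gl.MAX_TEXTURE_SIZE);
// var fits = dims.width <= max && dims.height <= max;
```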

1 Answer

gman (5 votes, accepted):

The easiest way is probably to load your textures into a 2d canvas to build your atlas.

Let's assume we have a function that downloads all 512 images, and that we want to put them in a grid of 32 across by 16 down.

var width = 128;  // just assuming this is the size of a single texture
var height = 128; 
var across = 32;
var down = 16;

asyncLoad512Images(useLoaded512Images);

function useLoaded512Images(arrayOfImages) {
  var canvas = document.createElement("canvas");
  canvas.width = width * across;
  canvas.height = height * down;
  var ctx = canvas.getContext("2d");

  // draw all the textures into the canvas
  arrayOfImages.forEach(function(image, ndx) {
    var x = ndx % across;
    var y = Math.floor(ndx / across);

    ctx.drawImage(image, x * width, y * height);
  });

  // now make a texture from canvas.
  var atlasTex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, atlasTex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE,
                canvas);
}
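To actually sample a given slice from the atlas in the shader, you need the inverse of this grid mapping: a slice index plus a (u, v) inside that slice becomes a (u, v) inside the whole atlas. A sketch of that arithmetic in plain JavaScript (`atlasUV` is a made-up helper name; the GLSL version is the same math):

```javascript
// Map a slice index and a (u, v) inside that slice to the (u, v)
// inside the whole atlas. This ports directly to GLSL.
function atlasUV(slice, u, v, across, down) {
  var x = slice % across;             // column of the slice in the grid
  var y = Math.floor(slice / across); // row of the slice in the grid
  return {
    u: (x + u) / across,
    v: (y + v) / down,
  };
}
```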

One optimization: you might change the code so you make the canvas at the beginning and, as each image loads, draw it into the 2D canvas at the correct place. The advantage is that the browser won't need to keep all 512 images in memory; it can discard each one right after you've drawn it.

var width = 128;  // just assuming this is the size of a single texture
var height = 128; 
var across = 32;
var down = 16;
var numImages = 32 * 16;
var numImagesDownloaded = 0;

// make a canvas to hold all the slices
var canvas = document.createElement("canvas");
canvas.width = width * across;
canvas.height = height * down;
var ctx = canvas.getContext("2d");

// assume the images are named image-x.png
for (var ii = 0; ii < numImages; ++ii) {
  loadImage(ii);
}

function loadImage(num) {
  var img = new Image();
  // wrap in a function so the handler runs when the image loads,
  // not immediately
  img.onload = function() {
    putImageInCanvas(img, num);
  };
  img.src = "image-" + num + ".png";
}

function putImageInCanvas(img, num) {
  var x = num % across;
  var y = Math.floor(num / across);

  ctx.drawImage(img, x * width, y * height);

  ++numImagesDownloaded;
  if (numImagesDownloaded === numImages) {
    // now make a texture from canvas.
    var atlasTex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, atlasTex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE,
                  canvas);
    // ...
  }
}

Alternatively, you can turn each image into a texture and use a texture attached to a framebuffer to render each image texture into the atlas texture. That's more work: you need to make a simple pair of 2D shaders and then render each image texture into the atlas texture at the correct place.

The only reason to do that, off the top of my head, is if the textures have 4 channels of data instead of 3 or fewer: there's no way to use all 4 channels with a 2D canvas, since a 2D canvas always uses premultiplied alpha.
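To see why premultiplied alpha loses data, here is a sketch of the 8-bit round trip a 2D canvas effectively performs when storing pixels (pure arithmetic, not a canvas API call):

```javascript
// Premultiply then un-premultiply one channel value at 8-bit
// precision, roughly what a 2d canvas does to stored pixels.
function roundTrip(channel, alpha) {
  var premult = Math.round(channel * alpha / 255);
  if (alpha === 0) return 0;  // channel data fully lost
  return Math.round(premult * 255 / alpha);
}

roundTrip(200, 255);  // full alpha: no loss
roundTrip(200, 10);   // low alpha: precision lost
roundTrip(200, 0);    // zero alpha: data gone entirely
```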

Drawing a texture into a texture is the same as drawing, period. See any example that draws into a texture.

The short version in three.js:

make a render target

rtTexture = new THREE.WebGLRenderTarget(
    width * across, height * down, {
      minFilter: THREE.LinearFilter,
      magFilter: THREE.NearestFilter,
      format: THREE.RGBFormat,
      depthBuffer: false,
      stencilBuffer: false,
    });
rtTexture.generateMipmaps = false;

Set up a plane and a material to render with, and put them in a scene. For each image texture, set the material to use that texture and set up whatever other parameters make the quad draw where you want it in the atlas texture. An orthographic camera would probably make that easiest. Then call render with the render target.

renderer.autoClear = false;
renderer.render( sceneRTT, cameraRTT, rtTexture, false );

That will render to rtTexture.
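Figuring out where each quad goes is the same grid arithmetic as the canvas version. A sketch of the placement math, assuming an orthographic camera that spans the atlas in pixel units with the origin at the top-left corner (the helper name is made up):

```javascript
// Center of the quad for slice ndx, in atlas pixel coordinates,
// for a camera like:
//   new THREE.OrthographicCamera(0, width * across, 0, height * down, -1, 1)
function sliceQuadCenter(ndx, width, height, across) {
  var x = ndx % across;
  var y = Math.floor(ndx / across);
  return {
    x: x * width + width / 2,
    y: y * height + height / 2,
  };
}
```

Before each render call you would position your quad mesh with something like `mesh.position.set(center.x, center.y, 0)`.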

When you're done, rtTexture is your atlas texture. Use it like any other texture: assign it to a material.