I am writing a virtual texturing application that uses a texture of 16384×16384 (width × height).
So initially I create an empty texture of 16384×16384 (width × height):
gl.texImage2D(...., width, height, gl.RGBA, ..., null)
I have multiple 1024×1024 JPEG images, and I am able to fill the individual tiles without any issues.
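Roughly, the setup looks like the following sketch (the size constants, the `uploadTile` helper, and the tile offsets are only for illustration, not my actual code):

```
// Allocate the empty 16384x16384 RGBA texture once, with no pixel data.
const TEX_SIZE = 16384;
const TILE_SIZE = 1024;

const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, TEX_SIZE, TEX_SIZE, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

// Copy one decoded 1024x1024 JPEG into its tile of the big texture.
function uploadTile(tileImage, tileX, tileY) {
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texSubImage2D(gl.TEXTURE_2D, 0,
                   tileX * TILE_SIZE, tileY * TILE_SIZE,
                   gl.RGBA, gl.UNSIGNED_BYTE, tileImage);
}
```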
But what I see is that when I use
gl.texSubImage2D(.........., imageElement)
it takes 10-30 milliseconds, whereas if I use
gl.texSubImage2D(.........., arraybuffer)
it takes 0-2 milliseconds.
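The timings above come from measuring the blocking call on the CPU side, roughly like this (a sketch; `xoffset`/`yoffset` are the tile's pixel offsets and `tilePixels` is assumed to be a 1024*1024*4 Uint8Array holding the same decoded data):

```
// Upload from an <img> element: the slow path (10-30 ms for me).
let t0 = performance.now();
gl.texSubImage2D(gl.TEXTURE_2D, 0, xoffset, yoffset,
                 gl.RGBA, gl.UNSIGNED_BYTE, imageElement);
console.log('image element upload:', performance.now() - t0, 'ms');

// Upload the same pixels from a typed array: the fast path (0-2 ms for me).
t0 = performance.now();
gl.texSubImage2D(gl.TEXTURE_2D, 0, xoffset, yoffset, 1024, 1024,
                 gl.RGBA, gl.UNSIGNED_BYTE, tilePixels);
console.log('typed array upload:', performance.now() - t0, 'ms');
```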
I have gone through this link and tried changing parameters in my application, but there is no performance improvement.
What exactly does the browser/GPU do after WebGL is handed an image (JPG/BMP/PNG) or an ArrayBuffer (Uint8Array/Uint16Array) via gl.texImage2D or gl.texSubImage2D? Is there a conversion involved that takes the extra time?
Which browser?
What the browser does is up to the browser. In the worst case, it hasn't decoded the image yet; in other words, it is still a compressed JPG, not an uncompressed bitmap.
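If lazy decoding is the problem, one thing worth trying is forcing the decode ahead of time with HTMLImageElement.decode() (a sketch; the tile URL is made up, and whether this actually moves the cost out of texSubImage2D is still up to the browser):

```
const img = new Image();
img.src = 'tiles/tile_0_0.jpg';   // hypothetical tile URL
await img.decode();               // ask the browser to decode the JPEG now
// With luck the image is an uncompressed bitmap by this point, so
// texSubImage2D no longer has to pay the decode cost synchronously.
gl.texSubImage2D(gl.TEXTURE_2D, 0, xoffset, yoffset,
                 gl.RGBA, gl.UNSIGNED_BYTE, img);
```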
Assuming it is already a bitmap: you ask the browser for RGBA/UNSIGNED_BYTE, but it could be storing the image as, say, RGB/UNSIGNED_BYTE, so it has to convert the entire image (or a sub-rectangle of it) from one format to the other. That means allocating space for the conversion, doing the conversion, calling texImage2D or texSubImage2D, and then deallocating the memory.
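One way to take that conversion off the upload path is to do it yourself, once, ahead of time: draw the image into a 2D canvas, read back the RGBA bytes, and hand WebGL a typed array that already matches the format you asked for (a sketch; the canvas readback has its own cost and a 2D canvas premultiplies alpha internally, so profile it and watch translucent pixels):

```
function imageToRGBA(img) {
  const canvas = document.createElement('canvas');
  canvas.width = img.width;
  canvas.height = img.height;
  const ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0);
  // getImageData hands back RGBA/UNSIGNED_BYTE, exactly the layout the
  // texture wants, so there is nothing left to convert at upload time.
  const rgba = ctx.getImageData(0, 0, img.width, img.height).data;
  return new Uint8Array(rgba.buffer, rgba.byteOffset, rgba.byteLength);
}

// Convert once when the tile arrives, then the hot path is the fast one:
const tilePixels = imageToRGBA(imageElement);
gl.texSubImage2D(gl.TEXTURE_2D, 0, xoffset, yoffset, 1024, 1024,
                 gl.RGBA, gl.UNSIGNED_BYTE, tilePixels);
```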
In the best case, the image is already on the GPU and the browser can use a shader to render part of that image into your texture, effectively simulating texImage2D/texSubImage2D. I know Chrome has paths to do this. I believe that, at least as of September 2020, Firefox does not, nor does Safari. Even if one or both have such paths, they certainly have opaque conditions on whether or not they can actually be used.
ImageBitmap is supposed to make it more likely that those conditions are met, whereas Image is less likely to hit them.
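For example, something along these lines, with the options that tend to matter for texture uploads spelled out (a sketch; the tile URL is made up, and which option combinations actually hit a given browser's fast path is, again, opaque):

```
const blob = await fetch('tiles/tile_0_0.jpg').then(r => r.blob());  // hypothetical URL

// Decode up front and pin down the conversions the browser would otherwise
// have to apply (or undo) at texSubImage2D time.
const bitmap = await createImageBitmap(blob, {
  imageOrientation: 'none',
  premultiplyAlpha: 'none',
  colorSpaceConversion: 'none',
});

gl.texSubImage2D(gl.TEXTURE_2D, 0, xoffset, yoffset,
                 gl.RGBA, gl.UNSIGNED_BYTE, bitmap);
bitmap.close();   // free the decoded pixels once uploaded
```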