I have a graphics filter (shader) written in GLSL for WebGL 2.0. I would like it to accept a texture in any one of three formats:
- 8-bit Uint
- 16-bit Uint
- 32-bit Float
As I understand it, for 32-bit floats I need to use RGBA32F and access the values as floats using "sampler2D"; float formats are not normalized on sampling, so the shader sees the stored values directly (0-1 in my case).
For 8-bit Uint, I can use "RGBA8" and access the values as floats between 0-1 using "sampler2D".
For 16-bit Uint, I need to use "RGBA16UI" and access the values as unsigned integers between 0-65535 using "usampler2D" (not "isampler2D", which is for the signed integer formats).
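
For concreteness, here is a minimal sketch of the float path as I currently picture it (the uniform and varying names are mine):

```glsl
#version 300 es
precision highp float;

// Works for both RGBA8 and RGBA32F textures:
// RGBA8 texels are normalized to 0-1 on sampling,
// RGBA32F texels come back as the stored floats.
uniform sampler2D uTex;

in vec2 vUV;
out vec4 outColor;

void main() {
    vec4 texel = texture(uTex, vUV);
    outColor = texel; // actual filter logic would go here
}
```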
Do I need to make two versions of my shader, one with "sampler2D" and one with "usampler2D" (in which I divide the texture values by 65535.0)? Or is there a way to access 16-bit Uints as floats between 0-1 using "sampler2D", maybe via some GL extension?
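
If two versions really are unavoidable, I assume the integer path would look roughly like this (again, the names are mine, and the manual divide is my guess at how to normalize):

```glsl
#version 300 es
precision highp float;
precision highp usampler2D; // integer samplers have no default precision

uniform usampler2D uTex; // bound to an RGBA16UI texture
                         // (NB: integer textures must use NEAREST filtering)

in vec2 vUV;
out vec4 outColor;

void main() {
    // Integer textures are not normalized; texture() returns the raw uvec4.
    uvec4 raw = texture(uTex, vUV);
    // Manually map 0-65535 to 0-1.
    outColor = vec4(raw) / 65535.0;
}
```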