Is there a performance penalty for enabling OpenGL ES extensions?


In general, is there a performance penalty simply from enabling OpenGL ES extensions in a shader?

I am working on code that injects #extension directives into the source of every shader, regardless of whether a given shader actually needs the extension, along the lines of the sketch below. Is this likely to carry a performance penalty? Conceptually it seems unlikely that it would.
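For illustration, the injected source ends up looking something like this (the extension names and shader body here are placeholders, not my actual code):

```c
/* Illustration only: the injector prepends #extension directives to every
 * shader, even ones (like this) that use neither extension. */
static const char *kFragmentSource =
    "#extension GL_EXT_shader_texture_lod : enable\n"       /* injected */
    "#extension GL_EXT_shader_framebuffer_fetch : enable\n" /* injected */
    "precision mediump float;\n"
    "uniform sampler2D uTexture;\n"
    "varying vec2 vTexCoord;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(uTexture, vTexCoord);\n"
    "}\n";
```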

I am specifically interested in iOS.


1 Answer

Accepted answer, by MuertoExcobito:

In terms of shader execution, simply requiring an extension does not have any effect. The shader is compiled into a GPU-compatible format, and #extension, being a pre-processor token, has no effect on the generated output. You can verify the equivalence of the generated output on other GLES platforms (e.g. Android) where glGetProgramBinaryOES is available, by comparing the binaries generated with and without the #extension directive.
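Here's a rough sketch of that comparison. It assumes the GL_OES_get_program_binary entry points have been resolved (e.g. via eglGetProcAddress, or by building with GL_GLEXT_PROTOTYPES), and that the two programs are already linked from the source variants with and without the directive; error handling is omitted:

```c
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Fetch a linked program's binary via GL_OES_get_program_binary. */
static void *get_binary(GLuint program, GLint *out_len)
{
    GLint len = 0;
    glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH_OES, &len);
    void *buf = malloc(len);
    GLenum format = 0;
    glGetProgramBinaryOES(program, len, NULL, &format, buf);
    *out_len = len;
    return buf;
}

/* Compare the binaries of two already-linked programs. */
void compare_programs(GLuint with_ext, GLuint without_ext)
{
    GLint len_a, len_b;
    void *a = get_binary(with_ext, &len_a);
    void *b = get_binary(without_ext, &len_b);
    if (len_a == len_b && memcmp(a, b, (size_t)len_a) == 0)
        printf("identical binaries: #extension changed nothing\n");
    else
        printf("binaries differ\n");
    free(a);
    free(b);
}
```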

The GLSL parser will have some additional work to do parsing the shader. It's unlikely that this will add significant processing time for a single shader, but it does depend on the driver's shader compiler. If you have an extremely large number of shaders, the cumulative extra work may become significant. In that case, there could be a benefit to stripping #extension statements that a shader does not actually require, along the lines of the sketch below.
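A naive sketch of such stripping; the feature_token hint is a stand-in for real per-extension knowledge of which identifiers an extension introduces (and it would false-positive on comments):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Return a copy of `source` with the "#extension <ext_name> ..." line
 * removed when the body never mentions `feature_token` (e.g.
 * "gl_LastFragData" for framebuffer fetch). Caller frees the result. */
char *strip_unused_extension(const char *source,
                             const char *ext_name,
                             const char *feature_token)
{
    char *copy = strdup(source);
    if (strstr(source, feature_token) != NULL)
        return copy; /* extension appears to be used; keep the directive */

    char needle[128];
    snprintf(needle, sizeof needle, "#extension %s", ext_name);
    char *line = strstr(copy, needle);
    if (line != NULL) {
        char *end = strchr(line, '\n');
        if (end != NULL)
            memmove(line, end + 1, strlen(end + 1) + 1); /* drop the line */
        else
            *line = '\0'; /* directive was the last line */
    }
    return copy;
}
```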

Specifically on iOS, compiled shader binaries are cached automatically, keyed by a hash of the source, and reused for subsequent compilations, even across app sessions. This means that you only 'pay' the cost of the parser on the first run of your app (unless your shaders are dynamic), making the performance of the parser less of a concern. This is not true on Android, where program binaries must be stored and loaded explicitly with glGetProgramBinaryOES and glProgramBinaryOES to get the same behavior.
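A sketch of the explicit caching needed on Android, again assuming GL_OES_get_program_binary; file handling is simplified, and a real cache must fall back to source compilation when the driver rejects a stale binary:

```c
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <stdio.h>
#include <stdlib.h>

/* Save a linked program's binary, prefixed with its driver-specific format. */
void save_program_binary(GLuint program, const char *path)
{
    GLint len = 0;
    glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH_OES, &len);
    void *buf = malloc(len);
    GLenum format = 0;
    glGetProgramBinaryOES(program, len, NULL, &format, buf);
    FILE *f = fopen(path, "wb");
    fwrite(&format, sizeof format, 1, f);
    fwrite(buf, 1, (size_t)len, f);
    fclose(f);
    free(buf);
}

/* Try to restore a cached binary; returns 0 if the caller must recompile. */
int load_program_binary(GLuint program, const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return 0;
    GLenum format = 0;
    fread(&format, sizeof format, 1, f);
    fseek(f, 0, SEEK_END);
    long len = ftell(f) - (long)sizeof format;
    fseek(f, (long)sizeof format, SEEK_SET);
    void *buf = malloc((size_t)len);
    fread(buf, 1, (size_t)len, f);
    fclose(f);
    glProgramBinaryOES(program, format, buf, (GLint)len);
    free(buf);
    GLint linked = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &linked);
    return linked == GL_TRUE; /* stale binaries fail here; recompile then */
}
```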

That said, requiring an extension implies that you will be using it, and that can have a large effect on shader performance depending on which extension it is and what you do with it. The list of iOS GLES extensions includes a few that are enabled by the #extension statement (EXT_draw_instanced, EXT_shader_texture_lod, EXT_shader_framebuffer_fetch, ...). It's easy to imagine that a shader that samples the framebuffer via the EXT_shader_framebuffer_fetch extension would take longer to execute than an equivalent shader that doesn't.
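For example, here the #extension line is actually needed: GL_EXT_shader_framebuffer_fetch exposes the current framebuffer color as gl_LastFragData[0] in GLSL ES 1.00, and the shader pays for reading it back. The tint blend itself is just illustrative:

```c
/* Fragment shader using framebuffer fetch for a programmable blend. */
static const char *kFetchFragmentSource =
    "#extension GL_EXT_shader_framebuffer_fetch : require\n"
    "precision mediump float;\n"
    "uniform vec4 uTint;\n"
    "void main() {\n"
    "    vec4 dst = gl_LastFragData[0];       // read back the framebuffer\n"
    "    gl_FragColor = mix(dst, uTint, 0.5); // programmable blending\n"
    "}\n";
```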