Failing to create a texture in a thread


In an application I am writing, I use several textures which I load on demand. Until now I created them in the main thread and everything worked fine. Now I want to load them in a separate thread, so I call the function that loads and binds the texture via beginthread. The image file is loaded, but GL fails with error 1282 (GL_INVALID_OPERATION). I assume OpenGL probably needs some per-thread initialization, but I am clueless.

I am coding in C++, compiling with GCC on Windows x64, using OpenGL 3, GLFW, and stb_image for image loading.

Here is the code:

    GLuint load_map(const char *filename)
    {
        GLuint texture;
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture); // all upcoming GL_TEXTURE_2D operations now affect this texture object

        // set texture wrapping and filtering parameters
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_BORDER);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

        // load image
        int width, height, nrChannels;
        unsigned char *data = stbi_load(filename, &width, &height, &nrChannels, STBI_rgb);
        if (data)
        {
            LOG1("LOADED<", width, height);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
            // glGenerateMipmap(GL_TEXTURE_2D);
        }
        else LOG1("Failed to load texture", "", "");

        stbi_image_free(data);
        LOG1("load_map", "ID>", texture)

        GLenum err = glGetError();
        LOG1("load_map", "error", err)

        return texture;
    }

LOG1 is just a logging helper macro.

1 Answer

Ripi2 (accepted answer):

Before any GL call is issued, the context must be made current on the thread that issues those GL calls.

For GLFW, use glfwMakeContextCurrent(GLFWwindow *window).
Yes, even with the same window value you need to make it current again, because it is a different thread.
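
A minimal sketch of that hand-off, assuming a single shared `window` and that the main thread stops issuing GL calls while the worker owns the context (`load_in_worker` is an illustrative name; `load_map` is the function from the question; this is not a complete program):

    #include <GLFW/glfw3.h>
    #include <thread>

    GLuint load_map(const char *filename);        // the function from the question

    void load_in_worker(GLFWwindow *window, const char *filename, GLuint *out)
    {
        glfwMakeContextCurrent(window);           // attach the context to this thread
        *out = load_map(filename);
        glFinish();                               // make sure the upload has completed
        glfwMakeContextCurrent(nullptr);          // detach so the main thread can take it back
    }

    // Main thread:
    //     glfwMakeContextCurrent(nullptr);       // release the context first
    //     std::thread t(load_in_worker, window, "map.png", &texture);
    //     t.join();
    //     glfwMakeContextCurrent(window);        // reclaim it for rendering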

Now, if you are thinking of several threads all using GL at once, each trying to make the same context current... no, that won't work. A context can be current on only one thread at a time.

You could use several contexts and share them at window creation. But GLFW keeps things simple: as far as I know, you can't create several contexts for the same window.
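
For completeness, the usual GLFW pattern for the "share at window creation" route is a second, invisible helper window whose context shares objects with the main one. A sketch, with `helper` as an illustrative name:

    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);                                 // the helper window is never shown
    GLFWwindow *helper = glfwCreateWindow(1, 1, "loader", nullptr, window);   // last argument: context to share with

    // Loader thread:
    //     glfwMakeContextCurrent(helper);
    //     GLuint tex = load_map("map.png");      // the texture object is visible in both contexts
    // The main thread keeps `window` current the whole time.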

You could bypass GLFW and create your own contexts, shared with the GLFW one... I don't know how to do that.

Or you could drop GLFW and handle contexts and windows with some other library, or on your own.

The point is that shared contexts share their textures. So you can upload in one context and the texture is available in every context shared with it.

But... there's always a "but"... most current graphics cards show no performance gain from using several contexts. Only a few can upload through several contexts at once, or read and draw simultaneously. So the multi-context advantage is not that great.

What you can do in your multi-threaded app is read each image from disk into RAM in a dedicated thread, and pass it to the GPU, only from the main thread, once it is ready.
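
A sketch of that split, with assumed names (`PendingImage`, `load_pixels`) that are not in the question's code: the worker thread only touches stb_image and RAM, and the main thread, which already owns the context, does the glTexImage2D call:

    #include <atomic>
    #include <thread>
    #include "stb_image.h"

    struct PendingImage {
        std::atomic<bool> ready{false};
        int width = 0, height = 0, nrChannels = 0;
        unsigned char *pixels = nullptr;
    };

    void load_pixels(const char *filename, PendingImage *img)
    {
        // No GL calls here, so this thread needs no context at all.
        img->pixels = stbi_load(filename, &img->width, &img->height, &img->nrChannels, STBI_rgb);
        img->ready.store(true);
    }

    // Main thread, e.g. once per frame:
    //     if (img.ready.load() && img.pixels) {
    //         glBindTexture(GL_TEXTURE_2D, texture);
    //         glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, img.width, img.height,
    //                      0, GL_RGB, GL_UNSIGNED_BYTE, img.pixels);
    //         stbi_image_free(img.pixels);
    //         img.pixels = nullptr;
    //     }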