Problem using two OpenGL contexts in two threads (one per thread)


I have been using one OpenGL context in one thread (very simplified) like this:

int main()
{
    // Initialize OpenGL (GLFW / GLEW)
    Compile_Shaders();
    while (glfwWindowShouldClose(WindowHandle) == 0)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glfwPollEvents();
        
        Calculate_Something(); // Compute Shader
        glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
        GLfloat* mapped = (GLfloat*)(glMapNamedBuffer(bufferResult, GL_READ_ONLY));
        memcpy(Result, mapped, sizeof(GLfloat) * ResX * ResY);
        glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);

        Render(Result);
        ImGui_Stuff();

        glfwSwapBuffers(WindowHandle);
    }
}

This works well until the calculations of the compute shader take longer; then they stall the main loop. I have been trying to use glFenceSync, but glfwSwapBuffers always has to wait until the compute shader is done.
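The glFenceSync attempt looked roughly like this (a simplified sketch of the idea, not the exact code; the fence is polled once per frame with a zero timeout):

// After dispatching the compute shader, insert a fence instead of mapping immediately:
Calculate_Something();
GLsync ComputeSync = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

// Per frame: poll the fence without blocking and only map once it has signaled.
GLenum State = glClientWaitSync(ComputeSync, 0, 0);
if (State == GL_ALREADY_SIGNALED || State == GL_CONDITION_SATISFIED)
{
    GLfloat* mapped = (GLfloat*)(glMapNamedBuffer(bufferResult, GL_READ_ONLY));
    memcpy(Result, mapped, sizeof(GLfloat) * ResX * ResY);
    glUnmapNamedBuffer(bufferResult);
    glDeleteSync(ComputeSync);
}

Even with the fence in place, glfwSwapBuffers stalled for as long as the compute dispatch occupied the GPU.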

Now I tried another approach: creating a separate OpenGL context in a second thread for the compute shader, like this:

void ComputeThreadFunc()
{
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 5);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow* WindowHandleCompute = glfwCreateWindow(50, 5, "Something", NULL, NULL);
    if (WindowHandleCompute == NULL)
    {
        std::cout << "Failed to open GLFW window." << std::endl;
        return;
    }

    GLuint Framebuffer;
    glfwMakeContextCurrent(WindowHandleCompute);
    glGenFramebuffers(1, &Framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, Framebuffer);

    // Compile compute shader

    while (true)
    {
        Calculate_Something();
        glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
        GLfloat* mapped = (GLfloat*)(glMapNamedBuffer(bufferResult, GL_READ_ONLY));
        memcpy(Result, mapped, sizeof(GLfloat) * ResX * ResY);
        glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
        
        Sleep(100); // Tried different values here to make sure the GPU isn't too saturated
    }
}

I changed the main-function to:

int main()
{
    // Initialize OpenGL (GLFW / GLEW)
    std::thread ComputeThread = std::thread(&ComputeThreadFunc);
    while (glfwWindowShouldClose(WindowHandle) == 0)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glfwPollEvents();

        Render(Result);
        ImGui_Stuff();

        glfwSwapBuffers(WindowHandle);
    }
}

Now what is displayed always seems to switch between two images (maybe the first two frames after startup). I think the compute shader/thread produces correct results (I can't really check, because the main loop doesn't display them).

What am I missing here? The two threads don't use shared resources/buffers (that I know of). I generated a separate framebuffer for the compute thread. Do I have to generate additional buffers (all the buffers the compute shader needs are of course generated), or synchronize somehow (the result is stored in a C++ array, so the OpenGL buffers can be completely separate)?

Should this approach work in general? And if so, are there general considerations that I did not take into account? If additional code is needed, please let me know.

Edit:

So, I just played around with Sleep(5000) to see when exactly the above error occurs. When I place this call before glMapNamedBuffer, the main window seems to work for 5 seconds. Placed after this call, it breaks immediately. Is there anything special about glMapNamedBuffer that I have to consider with multiple OpenGL contexts?


There are 2 answers

BDL

Window creation with GLFW is only possible in the main thread, as stated in the "Thread Safety" section of the GLFW docs.

Some other functions, like glfwMakeContextCurrent, may also be called from secondary threads. So what you have to do is create all windows in the main thread, but then use one of the windows in the calculation thread.

Basic structure:

int main()
{
  //Create Window 1
  auto window1 = ...  

  //Create Window 2
  auto window2 = ...

  //Start thread
  std::thread ComputeThread = std::thread(&ComputeThreadFunc, window2);

  //Render onto window1
  glfwMakeContextCurrent(window1);

  while (glfwWindowShouldClose(window1) == 0)
  {
      glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
      glfwPollEvents();

      Render(Result);
      ImGui_Stuff();

      glfwSwapBuffers(window1);
  }
}
void ComputeThreadFunc(GLFWwindow* window2)
{
    GLuint Framebuffer;
    glfwMakeContextCurrent(window2);
    glGenFramebuffers(1, &Framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, Framebuffer);

    // Compile compute shader

    while (true)
    {
        Calculate_Something();
        glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
        GLfloat* mapped = (GLfloat*)(glMapNamedBuffer(bufferResult, GL_READ_ONLY));
        memcpy(Result, mapped, sizeof(GLfloat) * ResX * ResY);
        Sleep(100); // Tried different values here to make sure the GPU isn't too saturated
    }
}

Also note that the buffer is currently mapped but never unmapped, so you should probably call glUnmapNamedBuffer after the memcpy line. Or you could use a persistently mapped buffer.
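A persistently mapped buffer could look roughly like this (a sketch, assuming GL 4.4+ / ARB_buffer_storage; the buffer must then be allocated with glNamedBufferStorage instead of glNamedBufferData):

// Immutable storage that is allowed to stay mapped while the GPU uses it:
glNamedBufferStorage(bufferResult, sizeof(GLfloat) * ResX * ResY, nullptr,
                     GL_MAP_READ_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);

// Map once at startup; the pointer stays valid until the buffer is deleted:
GLfloat* mapped = (GLfloat*)glMapNamedBufferRange(
    bufferResult, 0, sizeof(GLfloat) * ResX * ResY,
    GL_MAP_READ_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);

// Per frame: synchronize (fence or glFinish), then memcpy from 'mapped' directly,
// with no map/unmap call at all.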

Paul Aner

OK, I finally got this to work.

As mentioned in the edit above, I traced the problem to the glMapNamedBuffer call in the compute thread (glGetNamedBufferSubData produced the same error). Without those calls the main thread worked fine, but of course with the undesired side effect that I never got the results from the compute shader.

I now placed this call in the main thread. For that to work, one must first unbind the buffer with a call to glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0) in the compute thread and then bind it in the main thread. This has to be done after the compute shader is done, so I put glMemoryBarrier before the unbind call - which did not work. Only after I put glFinish there did it work.
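In outline, the working split looks like this (a sketch; the std::atomic<bool> flag is just my stand-in for whatever signaling mechanism you use between the two threads):

std::atomic<bool> ResultReady{false}; // shared flag, needs <atomic>

// Compute thread:
Calculate_Something();
glFinish();                                // really wait until the compute shader is done
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0); // unbind so the main thread can take over
ResultReady = true;

// Main thread, once per frame:
if (ResultReady)
{
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, bufferResult);
    GLfloat* mapped = (GLfloat*)(glMapNamedBuffer(bufferResult, GL_READ_ONLY));
    memcpy(Result, mapped, sizeof(GLfloat) * ResX * ResY);
    glUnmapNamedBuffer(bufferResult);
    ResultReady = false;
}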

Two questions remain and if anybody could give me an answer, it would be greatly appreciated:

Why does a call to glMapNamedBuffer in the compute thread break the main thread?

Why does glFinish work while glMemoryBarrier does not? Both should wait until the compute shader is done, shouldn't they?