I would like to implement ray tracing in OpenGL and GLSL to render models loaded from .obj files, but I don't understand how exactly to do this. I've used .obj files before, but I rendered them with rasterization. So far I've implemented a simple ray tracer in a fragment shader that renders some simple shapes (planes, spheres, boxes). The thing is that in ray tracing I calculate intersections with objects, but those objects are also defined in the fragment shader itself.
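For reference, each shape currently has its own hard-coded analytic intersection function in the fragment shader, roughly like this ray-sphere test:

```glsl
// Typical fragment-shader ray/sphere test; ro is the ray origin and
// rd the normalized ray direction for the current fragment. The
// sphere is hard-coded in the shader, which is exactly what I can't
// do for an arbitrary .obj mesh.
float intersectSphere(vec3 ro, vec3 rd, vec3 center, float radius)
{
    vec3 oc = ro - center;
    float b = dot(oc, rd);
    float c = dot(oc, oc) - radius * radius;
    float h = b * b - c;
    if (h < 0.0) return -1.0;   // ray misses the sphere
    return -b - sqrt(h);        // nearest hit distance along rd
}
```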
This is how I've done rendering with rasterization: after loading the vertex data (positions, normals, UVs) I store it in a VBO and bind it to a VAO. I send the vertices to the vertex shader and transform them by multiplying them with the MVP matrices, and the transformed vertices then go to the fragment shader, where I shade them. This is the part I don't understand how to implement with ray tracing: the fragment shader only receives the transformed vertices of the primitive it belongs to, so I don't know how to calculate ray intersections against the mesh triangles. How do I do this?
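For context, my vertex shader is just the usual MVP transform, roughly:

```glsl
#version 330 core
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aNormal;
layout(location = 2) in vec2 aUV;

uniform mat4 uModel;
uniform mat4 uView;
uniform mat4 uProjection;

out vec3 vNormal;
out vec2 vUV;

void main()
{
    vNormal = mat3(uModel) * aNormal;  // fine for rigid transforms
    vUV = aUV;
    gl_Position = uProjection * uView * uModel * vec4(aPosition, 1.0);
}
```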
Well, essentially for every fragment you perform a ray-intersection test against the whole scene. So the challenge is encoding the scene in a data structure that can be accessed from the fragment shader and offers enough storage. An immediate choice would be three 3-component 1D textures of the same size, where for each texel index the triple taken from the three textures represents one triangle. Then for each ray cast into the scene (i.e. for every fragment) iterate over the triangles.
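A minimal sketch of that brute-force loop, assuming the three textures are GL_RGB32F and using the Möller–Trumbore test per triangle (all names here, uVertexA/B/C, uTriangleCount, vRayDir, are illustrative):

```glsl
#version 330 core
// uVertexA/B/C are three GL_RGB32F 1D textures of the same width;
// texel i of each holds one corner of triangle i.
uniform sampler1D uVertexA;
uniform sampler1D uVertexB;
uniform sampler1D uVertexC;
uniform int  uTriangleCount;
uniform vec3 uRayOrigin;          // camera position

in  vec3 vRayDir;                 // per-fragment ray direction
out vec4 fragColor;

// Möller–Trumbore ray/triangle test: returns the hit distance
// along rd, or -1.0 on a miss.
float intersectTriangle(vec3 ro, vec3 rd, vec3 a, vec3 b, vec3 c)
{
    vec3 e1 = b - a;
    vec3 e2 = c - a;
    vec3 p  = cross(rd, e2);
    float det = dot(e1, p);
    if (abs(det) < 1e-8) return -1.0;   // ray parallel to triangle
    float inv = 1.0 / det;
    vec3  tv = ro - a;
    float u  = dot(tv, p) * inv;
    if (u < 0.0 || u > 1.0) return -1.0;
    vec3  q  = cross(tv, e1);
    float v  = dot(rd, q) * inv;
    if (v < 0.0 || u + v > 1.0) return -1.0;
    float t  = dot(e2, q) * inv;
    return t > 0.0 ? t : -1.0;
}

void main()
{
    vec3 rd = normalize(vRayDir);
    float tMin = 1e30;
    // Brute force: one texelFetch per corner, for every triangle.
    for (int i = 0; i < uTriangleCount; ++i) {
        vec3 a = texelFetch(uVertexA, i, 0).xyz;
        vec3 b = texelFetch(uVertexB, i, 0).xyz;
        vec3 c = texelFetch(uVertexC, i, 0).xyz;
        float t = intersectTriangle(uRayOrigin, rd, a, b, c);
        tMin = (t > 0.0 && t < tMin) ? t : tMin;
    }
    // Shade the nearest hit however you like; here just hit/miss.
    fragColor = (tMin < 1e30) ? vec4(1.0) : vec4(0.0, 0.0, 0.0, 1.0);
}
```

Note that no vertex stage runs over the scene geometry here, so upload the triangle corners already transformed to world space (or keep them in object space and transform the ray by the inverse model matrix instead); the per-fragment ray itself is generated from the camera exactly as for your analytic shapes.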
If you want to get a bit fancier you could use three 3D textures instead, placing each triangle's data into the region of the texture that corresponds to its position in the scene, so a ray only has to iterate over the texels of the cells it actually passes through. You could also use the textures' mipmap levels for LOD and sparse texture storage to reduce the memory footprint.
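Here is a rough sketch of what the per-cell lookup could look like under one possible layout; the grid size, the (slot, cell) texel mapping and the per-cell count texture are all assumptions, and stepping the ray from cell to cell (e.g. with a 3D DDA) is left out. It reuses intersectTriangle() from the sketch above:

```glsl
// Illustrative layout: grid cell (cx, cy, cz) stores its triangles
// along the x axis of the textures, at texels (slot, cx + cy * GRID, cz).
uniform sampler3D uCellVertexA;
uniform sampler3D uCellVertexB;
uniform sampler3D uCellVertexC;
uniform isampler3D uCellCount;   // number of triangles per cell
const int GRID = 16;             // cells per axis, arbitrary here

// Nearest hit among the triangles stored in one grid cell.
float traceCell(vec3 ro, vec3 rd, ivec3 cell)
{
    float tMin = 1e30;
    int count = texelFetch(uCellCount, cell, 0).r;
    for (int s = 0; s < count; ++s) {
        ivec3 at = ivec3(s, cell.x + cell.y * GRID, cell.z);
        vec3 a = texelFetch(uCellVertexA, at, 0).xyz;
        vec3 b = texelFetch(uCellVertexB, at, 0).xyz;
        vec3 c = texelFetch(uCellVertexC, at, 0).xyz;
        float t = intersectTriangle(ro, rd, a, b, c);
        if (t > 0.0 && t < tMin) tMin = t;
    }
    return tMin;
}
```

The caller would then step the ray through the grid cell by cell and stop at the first cell that yields a hit.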