Raytracing via diffusion algorithm


Many resources about raytracing say things like:

"shoot rays, find the first obstacle to cut it"

"shoot secondary rays..."

"or, do it reverse and approximate/interpolate"

I haven't seen any algorithm that uses a diffusion approach. Let's assume a point light is a cell that has more density than the other cells (all of space is divided into cells). Every step/iteration of lighting/tracing diffuses that source point into its neighbours using a velocity field, then into their neighbours, and so on. After a satisfactory number of iterations (such as 30-40), the density value of each cell is used to light the objects in that cell.
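A minimal sketch of that idea in Python (the grid size, diffusion rate, and iteration count below are made-up values, not something from the question):

    import numpy as np

    N = 32                                   # grid resolution (small, for illustration)
    density = np.zeros((N, N, N))
    density[N // 2, N // 2, N // 2] = 1.0    # point light = one high-density cell

    def diffuse(d, rate=0.9, iterations=40):
        for _ in range(iterations):
            # sum of the 6 axis-aligned neighbours (cells outside the grid count as zero)
            nb = np.zeros_like(d)
            nb[1:, :, :]  += d[:-1, :, :]
            nb[:-1, :, :] += d[1:, :, :]
            nb[:, 1:, :]  += d[:, :-1, :]
            nb[:, :-1, :] += d[:, 1:, :]
            nb[:, :, 1:]  += d[:, :, :-1]
            nb[:, :, :-1] += d[:, :, 1:]
            d = (1 - rate) * d + rate * nb / 6.0
        return d

    lit = diffuse(density)
    # lit[x, y, z] would then be used as the illumination of objects in that cell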

Point light and velocity field:

(image: point light and velocity field)

But the grid would have to be something like 1000x1000x1000, which would take too much time and memory to compute. Maybe computing a 10x10x10 grid and, when an obstacle is found, partitioning that area into 100x100x100 (in a dynamic kd-tree fashion) could produce lighting/shadows at an acceptable resolution? Especially for per-vertex illumination rather than per-triangle.
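A rough sketch of that coarse-to-fine partitioning (this is an octree-style subdivision rather than a strict kd-tree, and contains_obstacle is a hypothetical callback supplied by the scene):

    def refine(cell, depth, max_depth, contains_obstacle):
        """Return leaf cells, subdividing only where an obstacle is present.
        A cell is (x, y, z, size) in world units."""
        if depth == max_depth or not contains_obstacle(cell):
            return [cell]
        x, y, z, size = cell
        half = size / 2.0
        leaves = []
        for dx in (0.0, half):
            for dy in (0.0, half):
                for dz in (0.0, half):
                    leaves += refine((x + dx, y + dy, z + dz, half),
                                     depth + 1, max_depth, contains_obstacle)
        return leaves

    # e.g. refine((0.0, 0.0, 0.0, 1000.0), 0, 4, scene_has_obstacle)
    # keeps empty space coarse and only subdivides around obstacles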

Has anyone tried this approach?

Note: the velocity field is here to make the light diffuse mostly outwards (not 100% but 99%, so that some global illumination remains). A finite-element method can make this embarrassingly parallel.
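A hedged sketch of that biased spread, assuming the velocity field is simply unit vectors pointing away from the light (the 99%/1% split is the ratio mentioned above; everything else is made up for illustration):

    import numpy as np

    def biased_step(density, centre, outward=0.99):
        """Push 99% of each cell's density one cell along the outward field,
        and let the remaining 1% diffuse evenly to the 6 neighbours."""
        N = density.shape[0]
        new = np.zeros_like(density)
        for x, y, z in zip(*np.nonzero(density)):
            d = density[x, y, z]
            v = np.array([x, y, z], float) - centre
            n = v / max(np.linalg.norm(v), 1e-6)             # outward direction
            tx, ty, tz = np.clip(np.round([x + n[0], y + n[1], z + n[2]]),
                                 0, N - 1).astype(int)
            new[tx, ty, tz] += outward * d
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nx, ny, nz = np.clip([x + dx, y + dy, z + dz], 0, N - 1)
                new[nx, ny, nz] += (1 - outward) * d / 6.0
        return new

    # centre is the light's cell index, e.g.
    # density = biased_step(density, np.array([16.0, 16.0, 16.0]))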

Edit: any object that is hit by a positive density becomes an obstacle and generates a new velocity field around its surface, so light cannot go through the object but can be mirrored in another direction (if it is a lens object, light diffuses through it with more resistance). This way the reflected light can affect other objects, given a higher iteration limit.
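One hedged way to wire that into the sketch above: route the accumulation through a helper that consults a boolean solid mask (a hypothetical obstacle grid) and bounces the density back instead of letting it enter an object:

    def deposit(new, amount, src, dst, solid):
        """Add 'amount' of density to cell 'dst', unless 'dst' is inside an
        obstacle; then it is reflected back to the source cell 'src'."""
        if solid[dst]:
            new[src] += amount       # mirrored back: light cannot pass through
        else:
            new[dst] += amount

A lens-like material could instead let only a reduced fraction of 'amount' through, matching the lens behaviour described in the edit.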

The same kd-tree can be used in object-collision algorithms :)

Just to be taken with a grain of salt: a neural network could be trained for advection & diffusion on a 30x30x30 grid, and that could be used in a "GPU (OpenCL/CUDA) --> neural network --> finite element method --> shadows" pipeline.


1 Answer

Answer by CulDeVu (accepted):

There are a couple of problems with this as it stands.

The first problem is that, fundamentally, a photon in the Newtonian sense doesn't react or change based on the density of the photons around it. So using a density field and trying to make light follow the classic Navier-Stokes style solutions (which is what you're trying to do, based on the density field explanation you gave) would give incorrect results. It would also, given enough iterations, result in complete entropy over the scene, which is also not what happens to light.

Even if you were to get rid of the density problem, you're still left with the problem of multiple photons going different directions in the same cell, which is required for global illumination and diffuse lighting.

So, stripping away the problem portions of your idea, what you're left with is a particle system for photons :P

Now, to be fair, pseudo-particle systems are currently used for global illumination solutions. This type of thing is called Photon Mapping, but it's really only simple to implement a direct lighting solution with it :P
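For reference, a bare-bones illustration of that "particle system for photons" idea (this is not CulDeVu's code; scene_hit is a hypothetical function that returns the first surface point a ray from the given origin along the given direction hits, or None):

    import math
    import random

    def emit_photons(light_pos, scene_hit, count=10000):
        photons = []                              # list of (hit_position, power)
        for _ in range(count):
            # uniform random direction on the unit sphere
            z = random.uniform(-1.0, 1.0)
            phi = random.uniform(0.0, 2.0 * math.pi)
            r = math.sqrt(1.0 - z * z)
            direction = (r * math.cos(phi), r * math.sin(phi), z)
            hit = scene_hit(light_pos, direction)
            if hit is not None:
                photons.append((hit, 1.0 / count))
        return photons

    def estimate(photons, point, radius=0.1):
        # gather stored photons near the shading point (brute force for clarity)
        r2 = radius * radius
        power = sum(p for pos, p in photons
                    if sum((a - b) ** 2 for a, b in zip(pos, point)) <= r2)
        return power / (math.pi * r2)             # density estimate over the gather disc

This only covers the direct-lighting case the answer mentions; full photon mapping also bounces photons off surfaces before storing them.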