I'm working on a simulation of clouds (actual clouds) where the clouds are simulated as 3D points and then projected into a 2D heatmap roughly 640x480 units in size. The point count is about 50k, which is as small as I can go without the simulation breaking, but I can't find a way to perform the projection with any real speed (it usually takes 3-5 seconds of runtime).
I suppose my question is: is it feasible for an average computer to do this yet? I usually underestimate how fast computers are nowadays, but I might be overestimating them in this case. I haven't optimized the simulation yet, but if this is flat-out impossible, it would be good to know now and save myself the trouble.
If it is possible, is there any technique that might make the conversion from point data to heatmap fast enough to update 60 times a second? It really is just reading the point data and, after a transformation, writing the results into a 2D array, so I think it's mostly bound by memory lookups.
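Roughly, the inner loop looks like this (a simplified sketch; the names and the projection here are placeholders, not the actual transform):

```cpp
// Simplified sketch of the current CPU pass: transform each 3D point,
// project it to 2D, and accumulate it into a 640x480 heatmap.
#include <cstddef>
#include <vector>

struct Point3 { float x, y, z; };

constexpr int W = 640, H = 480;

void accumulateHeatmap(const std::vector<Point3>& points,
                       std::vector<float>& heat /* size W*H, pre-zeroed */)
{
    for (const Point3& p : points) {
        // Placeholder projection: orthographic drop of z.
        int u = static_cast<int>(p.x);
        int v = static_cast<int>(p.y);
        if (u >= 0 && u < W && v >= 0 && v < H)
            heat[static_cast<std::size_t>(v) * W + u] += 1.0f; // density count
    }
}
```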
It is definitely feasible, probably even if the calculations are done on the CPU: 50,000 points at 60 updates a second is only 3 million point transforms per second, which is well within reach of a modern CPU. Ideally, though, you should use the GPU. Suitable APIs are OpenCL or, since you are rendering the results anyway, compute shaders.
Both techniques let you write a small program (a kernel or shader) that operates on a single element (point). Thousands of these invocations run in parallel on the GPU, which should make the whole pass very fast.
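Here is a minimal sketch of that model, written in CUDA for concreteness rather than OpenCL or a compute shader (the structure carries over almost directly); the projection is a placeholder for whatever your transformation actually is:

```cuda
// CUDA sketch: one thread per point, scattering into the heatmap with
// an atomic add so that two points hitting the same cell don't race.
#include <cuda_runtime.h>

struct Point3 { float x, y, z; };

__global__ void heatmapKernel(const Point3* points, int numPoints,
                              float* heat, int width, int height)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPoints) return;

    Point3 p = points[i];
    // Placeholder projection; substitute your real transform here.
    int u = static_cast<int>(p.x);
    int v = static_cast<int>(p.y);
    if (u >= 0 && u < width && v >= 0 && v < height)
        atomicAdd(&heat[v * width + u], 1.0f);
}

// Launch: zero the heatmap, then run one thread per point.
// cudaMemset(d_heat, 0, width * height * sizeof(float));
// int threads = 256;
// int blocks = (numPoints + threads - 1) / threads;
// heatmapKernel<<<blocks, threads>>>(d_points, numPoints, d_heat, 640, 480);
```

The atomic add is the important detail: many points can land in the same heatmap cell, so the scatter has to be race-free. OpenCL's atomic_add and GLSL's imageAtomicAdd give you the same thing, though without extensions those operate on integer formats, so you may want to accumulate integer counts and convert to floats afterwards.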