I see that this question is quite popular, but I couldn't find an answer in the context of WebGPU. This is the shader code I wrote for rendering the Mandelbrot set:
@vertex
fn vert_main(@location(0) pos: vec2<f32>) -> @builtin(position) vec4<f32> {
    return vec4<f32>(pos, 0.0, 1.0);
}
struct Input {
    dummy: f32,        // Ignore this
    scalar: f32,       // The zoom level
    center: vec2<f32>, // The origin point on the canvas
}

const colorCount = 500;
const maxIterations = 1000;

@group(0) @binding(0) var<uniform> inputs: Input;
@group(0) @binding(1) var<uniform> colors: array<vec4<f32>, colorCount>;
@fragment
fn frag_main(@builtin(position) fragCoord: vec4<f32>) -> @location(0) vec4<f32> {
    // Map the pixel coordinate to a point C on the complex plane.
    let C = (fragCoord.xy - inputs.center) * inputs.scalar;
    return colors[getMandelBrotIterations(C) % colorCount];
}
fn getMandelBrotIterations(C: vec2<f32>) -> i32 {
    var Z = vec2<f32>(0.0, 0.0);
    for (var i = 0; i < maxIterations; i++) {
        Z = square(Z) + C;
        // Escape test: |Z|^2 >= 4 means the orbit diverges.
        if (dot(Z, Z) >= 4.0) {
            return i;
        }
    }
    // Never escaped: treat the point as inside the set.
    return 0;
}
// Complex squaring: (x + iy)^2 = (x^2 - y^2) + (2xy)i.
fn square(Z: vec2<f32>) -> vec2<f32> {
    return vec2<f32>(Z.x * Z.x - Z.y * Z.y, 2.0 * Z.x * Z.y);
}
Apparently, this suffers from pixelation when zooming in, due to the limited precision of f32. I think even if WebGPU had a float64 type, the issue would still appear at some point, just at a deeper zoom level. So what's the strategy behind achieving the infinite zoom seen in those YouTube videos?
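For reference, here is my current understanding of the two strategies I keep seeing mentioned, so answers can correct me if I'm off. The first is to emulate extra precision in the shader by storing each coordinate as an unevaluated sum of two f32 values ("double-single" or float-float arithmetic). A minimal sketch of the add/multiply primitives; DS, ds_from_f32, ds_add, and ds_mul are names of my own choosing, not a standard API:

// A number x is represented as x.hi + x.lo (both f32),
// giving roughly twice the significand bits of a plain f32.
struct DS { hi: f32, lo: f32 }

fn ds_from_f32(a: f32) -> DS {
    return DS(a, 0.0);
}

fn ds_add(a: DS, b: DS) -> DS {
    // Knuth's two-sum: s + e equals a.hi + b.hi exactly.
    let s = a.hi + b.hi;
    let v = s - a.hi;
    var e = (a.hi - (s - v)) + (b.hi - v);
    // Fold in the low parts, then renormalize (quick-two-sum).
    e = e + a.lo + b.lo;
    let hi = s + e;
    return DS(hi, e - (hi - s));
}

fn ds_mul(a: DS, b: DS) -> DS {
    // fma recovers the rounding error of a.hi * b.hi exactly
    // (assuming the implementation maps fma to a real fused op).
    let p = a.hi * b.hi;
    var e = fma(a.hi, b.hi, -p);
    e = e + a.hi * b.lo + a.lo * b.hi;
    let hi = p + e;
    return DS(hi, e - (hi - p));
}

My understanding is that this roughly doubles the usable precision, but I gather WGSL doesn't promise strict IEEE evaluation order, so an optimizer that reassociates or contracts these expressions could break the error-free transforms. And as I said, any fixed-width format still runs out eventually.

The second strategy, which is apparently what the deep-zoom videos rely on, is perturbation theory: compute one reference orbit Z_n at arbitrary precision on the CPU, upload it, and have the shader iterate only each pixel's small offset d from the reference, which stays representable in f32. Writing Z = Zref + d and C = Cref + dc, the iteration Z(n+1) = Z(n)^2 + C reduces to d(n+1) = (2*Zref(n) + d(n)) * d(n) + dc. A sketch of the inner loop, again with made-up names (refOrbit, cmul, dc) and reusing maxIterations from my shader above:

// Reference orbit Z_0..Z_n, computed on the CPU in arbitrary
// precision and truncated to f32 for upload.
@group(0) @binding(2) var<storage, read> refOrbit: array<vec2<f32>>;

// Complex multiplication.
fn cmul(a: vec2<f32>, b: vec2<f32>) -> vec2<f32> {
    return vec2<f32>(a.x * b.x - a.y * b.y, a.x * b.y + a.y * b.x);
}

// dc = this pixel's offset from the reference point Cref.
fn perturbIterations(dc: vec2<f32>) -> i32 {
    var dz = vec2<f32>(0.0, 0.0);
    for (var i = 0; i < maxIterations; i++) {
        let zref = refOrbit[i];
        // d(n+1) = (2*Zref(n) + d(n)) * d(n) + dc
        dz = cmul(2.0 * zref + dz, dz) + dc;
        // Full value, only needed for the escape test.
        let z = zref + dz;
        if (dot(z, z) >= 4.0) {
            return i;
        }
    }
    return 0;
}

Is this the right direction, and is the perturbation approach what actually makes those "infinite" zooms practical?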