I'm working on a ray tracer in Python, and I'm trying to use samples per pixel (SPP) to deliberately reduce the quality of the rendered images, i.e., to produce noisy renders at low sample counts. However, when I increase the number of samples per pixel, the image quality improves, which is the opposite of what I want.
Here's what I've tried: I modified my rendering engine to cast multiple rays per pixel and average the results. To introduce noise, I generate a fixed number of candidate samples per pixel and randomly select a subset of those samples to actually trace and average.
However, regardless of the number of samples per pixel I use, the image quality seems to improve rather than degrade. I suspect my renders have effectively converged, so the sample count no longer makes a visible difference, and I'm not sure how to proceed from here.
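Concretely, the sampling logic I described is essentially this (a simplified sketch, not my exact code: sample_pixel, trace_fn, and make_ray_fn are placeholder names, and Vector is the same vector/color class used in the engine code below):

import random

def sample_pixel(trace_fn, make_ray_fn, i, j, n_samples, subset_size):
    # Generate a fixed pool of jittered sub-pixel offsets for pixel (i, j)...
    offsets = [(random.random(), random.random()) for _ in range(n_samples)]
    # ...then randomly pick a subset of them to actually trace.
    chosen = random.sample(offsets, subset_size)
    color = Vector()
    for dx, dy in chosen:
        color += trace_fn(make_ray_fn(i + dx, j + dy))
    # Average the traced samples into the final pixel color.
    return color * (1.0 / subset_size)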
Here's a simplified version of my rendering engine code:
from math import pi, tan

class RenderEngine:
    MAX_DEPTH = 5
    MIN_DISPLACE = 0.0001  # offset to avoid self-intersection ("shadow acne")

    def render(self, scene):
        # Build an orthonormal camera basis: forward, right, up.
        f = (scene.target - scene.camera).normalize()
        r = f.cross_product(scene.up).normalize()
        u = r.cross_product(f).normalize()
        angle = tan(pi * 0.5 * scene.fov / 180)
        aspect_ratio = scene.width / scene.height
        canvas = ImageCanvas(scene.width, scene.height)
        for j in range(scene.height):
            for i in range(scene.width):
                # Map the pixel center to the image plane and cast one ray.
                x = (2 * (i + 0.5) / scene.width - 1) * angle * aspect_ratio
                y = (1 - 2 * (j + 0.5) / scene.height) * angle
                direction = (r * x + u * y + f).normalize()
                canvas.set_pixel(i, j, self.ray_trace(Ray(scene.camera, direction), scene))
        return canvas.image

    def ray_trace(self, ray, scene, depth=0):
        # Find the nearest object hit by the ray.
        obj_hit = None
        dist_hit = None
        for obj in scene.objects:
            dist = obj.intersects(ray)
            if dist is not None and (obj_hit is None or dist < dist_hit):
                dist_hit = dist
                obj_hit = obj
        if obj_hit is None:
            return Vector()  # background color
        hit_pos = ray.origin + ray.direction * dist_hit
        normal = obj_hit.normal(hit_pos)
        material = obj_hit.material
        # Ambient term.
        color = material.ambient * Vector(1, 1, 1)
        for light in scene.lights:
            to_light = Ray(hit_pos, light.position - hit_pos).direction
            to_cam = (scene.camera - hit_pos).normalize()
            half_vector = (to_light + to_cam).normalize()
            # Lambertian diffuse plus Blinn-Phong specular.
            color += material.color * material.diffuse * max(normal.dot_product(to_light), 0)
            color += light.color * material.specular * max(normal.dot_product(half_vector), 0) ** 50
        if depth < self.MAX_DEPTH:
            # Recurse along the mirror reflection, nudged off the surface.
            reflect_origin = hit_pos + normal * self.MIN_DISPLACE
            reflect_dir = ray.direction - 2 * ray.direction.dot_product(normal) * normal
            color += self.ray_trace(Ray(reflect_origin, reflect_dir), scene, depth + 1) * material.reflection
        return color
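For reference, here is roughly how the sampling plugs into the engine in my non-simplified version, written as a variant of the render method above. The SampledRenderEngine name and the n_samples default are just for illustration, and it reuses the pi/tan imports and the Ray, Vector, and ImageCanvas classes from the code above:

import random

class SampledRenderEngine(RenderEngine):
    def render(self, scene, n_samples=16):
        # Same camera setup as RenderEngine.render above.
        f = (scene.target - scene.camera).normalize()
        r = f.cross_product(scene.up).normalize()
        u = r.cross_product(f).normalize()
        angle = tan(pi * 0.5 * scene.fov / 180)
        aspect_ratio = scene.width / scene.height
        canvas = ImageCanvas(scene.width, scene.height)
        for j in range(scene.height):
            for i in range(scene.width):
                color = Vector()
                for _ in range(n_samples):
                    # Jitter within the pixel footprint instead of the fixed 0.5 center.
                    x = (2 * (i + random.random()) / scene.width - 1) * angle * aspect_ratio
                    y = (1 - 2 * (j + random.random()) / scene.height) * angle
                    color += self.ray_trace(Ray(scene.camera, (r * x + u * y + f).normalize()), scene)
                # Average the accumulated samples.
                canvas.set_pixel(i, j, color * (1.0 / n_samples))
        return canvas.image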
I'm looking for guidance on how to properly reduce image quality using samples per pixel in my ray tracer. Any suggestions on adjusting my implementation, or alternative approaches I should consider, would be greatly appreciated.