
Screenspace Reflections + Deferred Rendering

Updated: Apr 24, 2024

A C++/GLSL implementation of screenspace reflection using deferred rendering.


Screenspace reflections are pretty neat. With traditional forward rendering, computing the color of a pixel means accounting for every light source hitting that point in world space. But what if the scene had 50, 100, or a million lights? That gets extremely expensive.


Suppose we separated important information like Geometry (world position), Surface Normals, Roughness/Metallic, and Albedo colors into individual maps. Then, for a given pixel, we can look up the world space value for each of these attributes.


For example, if we have

texelFetch(u_GeometryMap, ivec2(10, 20), 0) --returns--> vec4(0, 50, 42, 1)

This tells us that the current pixel located at position 10, 20 on screen contains some geometry that is located at World Coordinates (x = 0, y = 50, z = 42). (Unfortunately, we can't actually print this in GLSL, but outputting the position map as color alone can tell us a lot about the values being returned).


Note how the world positions here correspond to the output RGB Color:


(x = 0, y = 0, z = 1) --> (0, 0, 1)

(x = 0, y = 1, z = 0) --> (0, 1, 0)

(x = 1, y = 1, z = 0) --> (1, 1, 0)

(x = 1, y = 0, z = 1) --> (1, 0, 1)


Now, we can do the same thing for the other components - Surface Normal, Metallic/Roughness, and Albedo.


G-Buffer - Surface Normal

G-Buffer - Material (R = Roughness, G = Metallic)

G-Buffer - RGB Albedo

Now that we have all this information, let's put it all together. For each pixel, we raymarch from the camera into world space along the reflected ray with a fixed step size. At each step, we compare the ray's current depth against the G-buffer depth of whatever geometry lies at that screen position. When we find a crossing, we perform a binary search over the last step to narrow down the exact hit location and confirm whether it's in fact a true intersection.


Note that since the geometry G-buffer contains information only for what's on screen, if our reflected ray bounces off screen, we don't actually see any geometry, and therefore... no reflection.


(If we can't see it, it can't see us.)

One limitation of screenspace reflection

Notice how the top of the sphere isn't reflected. That's because our geometry map only has information for what's on screen. Since we can't see the top of the sphere, it won't appear in our reflection, either.


If we zoom out a bit, we'll see how the different reflections in this scene vary in blurriness. I did this by computing a Gaussian blur in a set number of tiers, applying it repeatedly to get increasing amounts of blur. We then pick a tier based on the roughness/metallic attributes of the surface to decide how blurry its reflection should be.


Specular (perfectly reflective), no blur.

Somewhat rough, proportional blur / glossy reflection.

Diffuse material, blurriest.


Gaussian blur slows down the program a bit (it's a double for-loop per pixel), but we can use a smaller kernel and apply it multiple times to get different levels of blur. In the example below, the leftmost panel is perfectly reflective and the rightmost panel is fully diffuse. The three center panels exhibit increasing amounts of roughness (i.e. intermediate levels of glossy reflection).


If we zoom out, there we go! Reflections in screen space!


It's important to note that step size and intersection tolerance may need to be adjusted based on your scene's size and geometry. For example, if your raymarching step size is too large, you may overshoot and miss geometry entirely. If your intersection tolerance is too high, you may capture intersections with geometry your ray did not actually hit. It's a bit of trial and error, but adjusting these thresholds will reduce undesired artifacts and give you cleaner reflections.

Screenspace reflection with varying levels of blur




