Techniques used in real-time rendering to approximate the quality of offline rendering.
Lighting and Global Illumination
Baked Lighting
Precompute lighting data for static objects
Lightmaps
2D texture of light intensity and colour for the surface of a static mesh; accounts for both direct and indirect lighting
Ambient Occlusion maps
- Ambient occlusion: how exposed a point in the scene is to ambient lighting (light from the surrounding environment rather than a direct light source); darkens creases and constricted spaces
- Greyscale map: 0 = fully occluded, 1 = fully exposed
- Built through ray tracing: cast a set of rays over the hemisphere around the surface normal, count how many are blocked by nearby geometry, and store the unoccluded fraction at that point
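The baking step above can be sketched as follows. `is_occluded` is a hypothetical scene query standing in for a real ray–geometry intersection test:

```python
import math
import random

def bake_ao(point, normal, is_occluded, n_rays=64):
    """Bake ambient occlusion at a surface point by casting rays over the
    hemisphere around the normal. `is_occluded(origin, direction)` is a
    hypothetical scene query returning True if the ray hits geometry."""
    hits = 0
    for _ in range(n_rays):
        # Sample a random unit direction, flip it into the normal's hemisphere
        d = [random.gauss(0, 1) for _ in range(3)]
        norm = math.sqrt(sum(c * c for c in d)) or 1.0
        d = [c / norm for c in d]
        if sum(a * b for a, b in zip(d, normal)) < 0:
            d = [-c for c in d]
        if is_occluded(point, d):
            hits += 1
    # Store the unoccluded fraction: 1.0 = fully exposed, 0.0 = fully occluded
    return 1.0 - hits / n_rays
```

On an open plane every ray escapes, so the baked value is 1.0; inside a fully enclosed cavity it is 0.0.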
Shadow maps
Shadows from static geometry baked into lightmaps/textures (distinct from runtime shadow mapping, which renders a depth map from the light's point of view each frame)
Light probes
Light “sensors” to store precomputed lighting data at discrete points in space
- Dynamic objects sample nearby light probes and interpolate to get environment lighting info (indirect lighting, AO)
- Stored as spherical harmonics or cubemaps
- Spherical harmonics: a handful of coefficients encoding low-frequency directional lighting; very compact, diffuse-quality only
- Cubemaps: six textures capturing the surrounding environment; more detail, higher memory cost
- Diffuse lighting only, don’t react to real-time light changes
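The probe sampling step can be sketched with inverse-distance weighting, a stand-in for the tetrahedral/grid interpolation real engines use. `probes` is a hypothetical list of (position, SH coefficients) pairs:

```python
import math

def sample_probes(position, probes):
    """Interpolate precomputed lighting at `position` from nearby light probes.
    `probes` is a hypothetical list of (probe_position, sh_coeffs) pairs, where
    sh_coeffs is a flat list of spherical-harmonic coefficients."""
    weights, total = [], 0.0
    for pos, _ in probes:
        d = math.dist(position, pos)
        w = 1.0 / (d + 1e-6)           # closer probes contribute more
        weights.append(w)
        total += w
    n = len(probes[0][1])
    blended = [0.0] * n
    for w, (_, sh) in zip(weights, probes):
        for i in range(n):
            blended[i] += (w / total) * sh[i]
    return blended
```

A point halfway between two probes gets an even blend of their coefficients.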
Reflection probes
Cubemap of nearby environment
- Render scene from 6 directions around probe at bake time
- Shader of reflective objects can sample the probe at runtime
- Dynamic reflection probes: re-render cubemap periodically
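When a shader samples the probe, the reflection direction selects one of the six cubemap faces and a (u, v) within it. A sketch of that lookup (face orientation conventions differ per API, so treat the exact u/v signs as an assumption):

```python
def cubemap_lookup(direction):
    """Map a reflection direction to a cubemap face and (u, v) in [0, 1].
    The dominant axis of the direction picks the face; the other two
    components are projected onto that face."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face, major, u, v = ('+X' if x > 0 else '-X'), ax, (-z if x > 0 else z), -y
    elif ay >= az:
        face, major, u, v = ('+Y' if y > 0 else '-Y'), ay, x, (z if y > 0 else -z)
    else:
        face, major, u, v = ('+Z' if z > 0 else '-Z'), az, (x if z > 0 else -x), -y
    # Project onto the face plane and remap from [-1, 1] to [0, 1]
    return face, (u / major + 1) / 2, (v / major + 1) / 2
```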
Screen-space techniques
Only use what’s currently available in the rendered frame
Screen-space Ambient Occlusion (SSAO)
Sample the depth buffer at points around each pixel
- If a sample's stored depth is closer to the camera than the centre pixel's depth, nearby geometry is assumed to occlude the pixel
- Adds soft contact shadows in corners and creases
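A minimal sketch of that depth comparison, using a fixed sample kernel as a stand-in for the rotated Poisson-disc kernels real SSAO uses. `depth` is a hypothetical 2D list of view-space depths:

```python
# Fixed sample kernel (stand-in for a randomised/rotated kernel)
OFFSETS = [(-2, -1), (-1, 2), (0, -2), (1, 1), (2, 0), (-2, 2), (1, -2), (2, 2)]

def ssao(depth, x, y, bias=0.05):
    """Screen-space AO sketch: neighbours significantly closer to the camera
    than the centre pixel are treated as occluders."""
    h, w = len(depth), len(depth[0])
    centre = depth[y][x]
    occluded = 0
    for dx, dy in OFFSETS:
        sx = min(max(x + dx, 0), w - 1)
        sy = min(max(y + dy, 0), h - 1)
        # A neighbour in front of the centre pixel (smaller depth) occludes it
        if centre - depth[sy][sx] > bias:
            occluded += 1
    return 1.0 - occluded / len(OFFSETS)   # 1.0 = fully lit, lower = darker
```

A flat surface stays fully lit; a pixel at the bottom of a pit goes fully dark.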
Screen-space Reflection (SSR)
Use depth and colour buffer to simulate reflection by tracing rays in screen space
TODO
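The screen-space trace can be sketched as marching a reflected ray through the depth buffer in pixel steps. All buffers are hypothetical 2D lists; a real implementation works with view-space rays projected per step:

```python
def ssr_trace(depth, colour, x, y, ray_dx, ray_dy, depth_step,
              max_steps=32, thickness=0.1):
    """Screen-space reflection sketch: advance the ray's expected depth each
    step; when the ray falls just behind the stored depth buffer value, we
    have a hit and return that pixel's colour."""
    h, w = len(depth), len(depth[0])
    ray_depth = depth[y][x]
    px, py = float(x), float(y)
    for _ in range(max_steps):
        px += ray_dx
        py += ray_dy
        ray_depth += depth_step
        ix, iy = int(px), int(py)
        if not (0 <= ix < w and 0 <= iy < h):
            break                         # ray left the screen: no data
        if 0 < ray_depth - depth[iy][ix] < thickness:
            return colour[iy][ix]         # ray passed behind geometry: hit
    return None                           # miss: fall back to a probe/skybox
```

Returning `None` on a miss mirrors why SSR is usually paired with reflection probes: anything off-screen simply cannot be reflected.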
Screen-space Global Illumination (SSGI)
Simulate indirect lighting (first diffuse light bounce) using only screen info
- Sample nearby pixels within a hemisphere around the normal; bright ones act as indirect light sources, with a depth-buffer comparison to check for occlusion
- Adds colour bleeding, diffuse bounce, responds to movement
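A toy version of the gather, using a 3×3 neighbourhood in place of a hemisphere kernel and a depth delta as a crude occlusion check (both simplifying assumptions):

```python
def ssgi(colour, depth, x, y, max_depth_delta=0.5):
    """Screen-space GI sketch: average the colour of nearby on-screen pixels,
    treating neighbours at a similar depth as one-bounce indirect light.
    Buffers are hypothetical 2D lists; colours are (r, g, b) tuples."""
    h, w = len(depth), len(depth[0])
    bounce = [0.0, 0.0, 0.0]
    count = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            sx, sy = x + dx, y + dy
            if (dx, dy) == (0, 0) or not (0 <= sx < w and 0 <= sy < h):
                continue
            # Crude occlusion check: only gather from similar-depth neighbours
            if abs(depth[sy][sx] - depth[y][x]) > max_depth_delta:
                continue
            for c in range(3):
                bounce[c] += colour[sy][sx][c]
            count += 1
    return [c / count for c in bounce] if count else bounce
```

A dark pixel surrounded by bright red neighbours picks up red bounce light, which is exactly the colour-bleeding effect described above.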
Real-time Global Illumination
Lumen (Unreal Engine 5)
Hybrid GI system using screen space, signed distance fields (SDFs) and surface caching
- Screen-space tracing
- First bounce is attempted using visible geometry (similar to SSR or SSGI)
- If bounce point is visible on screen, fast and accurate
- Distance field tracing
- When screen-space fails, fall back to tracing rays through distance fields: simplified 3D volumes representing scene geometry
- Captures off-screen and large scale bounce
- Surface caching
- Store cached representation of scene’s lighting info, sampled from multiple rays over time
- Used for lighting surfaces not directly visible on screen
- Temporal accumulation
- Denoise and stabilise lighting over multiple frames
- Why is it relevant?
- No baking required
- Supports moving lights
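The fallback chain the bullets describe can be sketched as below. This is not Unreal's actual API; the three callables are hypothetical stand-ins for the screen trace, the SDF trace, and the surface cache lookup:

```python
def trace_indirect(ray, screen_trace, sdf_trace, surface_cache):
    """Hybrid GI trace sketch: try the cheap screen-space path first, fall
    back to distance-field tracing, and resolve off-screen hits from the
    surface cache."""
    hit = screen_trace(ray)              # fast path: bounce point on screen
    if hit is not None:
        return hit
    hit_point = sdf_trace(ray)           # fallback: march simplified SDFs
    if hit_point is not None:
        return surface_cache(hit_point)  # cached lighting for off-screen hits
    return None                          # miss: sample sky/fallback lighting
```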
DDGI (Dynamic Diffuse Global Illumination)
TODO
Selective Raytracing Passes
Run ray tracing for selected effects only (shadows, reflections, indirect lighting), often leveraging RTX hardware
RTX
- NVIDIA’s real-time ray tracing technology
- Hardware: NVIDIA RTX GPUs
- Software: NVIDIA RTX, DLSS, Reflex
- RT cores: dedicated hardware units that accelerate BVH traversal and ray–box/ray–triangle intersection tests
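The per-node test that BVH traversal repeats millions of times per frame is the ray–AABB "slab test"; in software it looks like this (RT cores run the equivalent in fixed-function hardware):

```python
def ray_aabb_hit(origin, inv_dir, box_min, box_max):
    """Slab test for one BVH node: does the ray hit the node's axis-aligned
    bounding box? `inv_dir` is 1/direction per axis, precomputed once per
    ray; assumes no zero direction components for simplicity."""
    t_near, t_far = 0.0, float('inf')
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))   # latest entry across the slabs
        t_far = min(t_far, max(t1, t2))     # earliest exit
    return t_near <= t_far                  # overlap means the box is hit
```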
TODO: look further into RTXGI and related SDKs
Level of Detail and Asset Complexity
LOD Systems
Dynamically swap mesh or material versions based on distance to the camera
- Create multiple versions of a mesh (LOD0, LOD1, LOD2, etc.) with progressively fewer polygons
- Engine switches at runtime based on distance from camera, screen size of the object and performance budget
- Can also apply to materials, textures, rigs, …
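The runtime switch reduces to picking an index from a list of distance thresholds. A sketch (engines typically use projected screen size rather than raw distance, and add hysteresis to avoid popping):

```python
def select_lod(distance, lod_distances):
    """Pick a LOD index from camera distance. `lod_distances` lists the
    switch distances, e.g. [10, 30, 80]: closer than 10 -> LOD0,
    10-30 -> LOD1, and so on; beyond the last threshold -> coarsest LOD."""
    for lod, threshold in enumerate(lod_distances):
        if distance < threshold:
            return lod
    return len(lod_distances)
```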
Culling
Skip rendering objects that can't contribute to the final image
- Frustum Culling: skip objects outside the camera view
- Occlusion Culling: skip objects behind other objects
- Distance Culling: skip objects beyond a certain distance
- Portal Culling: in interiors, render only the rooms/zones visible through portals (doors, windows)
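Frustum culling against bounding spheres is the simplest of these. A sketch, assuming each frustum plane is given as (normal, d) with the normal pointing into the frustum:

```python
def frustum_cull(objects, planes):
    """Keep objects whose bounding sphere is not entirely outside any frustum
    plane. Each object is (centre, radius); a signed distance below -radius
    means the sphere is completely on the outside of that plane."""
    visible = []
    for centre, radius in objects:
        inside = True
        for normal, d in planes:
            dist = sum(n * c for n, c in zip(normal, centre)) + d
            if dist < -radius:          # sphere entirely outside this plane
                inside = False
                break
        if inside:
            visible.append((centre, radius))
    return visible
```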
Textures and Materials
Material optimisation
- Shader LOD: simpler shader versions for distant objects
- Material instancing: reuse same shader with different parameters
- Packed textures: store several different non-colour maps (roughness, metallic, AO) in the R, G, B and A channels of a single image -> sample one RGBA texture instead of three separate single-channel ones -> fewer texture lookups and lower memory usage
- Mipmapping: use lower-res texture versions for distant surfaces
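The channel-packing idea in miniature, using plain 2D lists standing in for image data and the common AO/roughness/metallic ("ORM") channel assignment as an assumed convention:

```python
def pack_orm(ao, roughness, metallic):
    """Pack three single-channel maps (same size, values in [0, 1]) into one
    RGBA-style texture: AO -> R, roughness -> G, metallic -> B, A unused."""
    return [[(ao[y][x], roughness[y][x], metallic[y][x], 1.0)
             for x in range(len(ao[0]))]
            for y in range(len(ao))]

def sample_orm(packed, x, y):
    """One texture fetch now returns all three material parameters."""
    r, g, b, _ = packed[y][x]
    return {'ao': r, 'roughness': g, 'metallic': b}
```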
Fake Subsurface Scattering
- Translucency map: use a mask that brightens thin areas, multiply by back lighting
- Screen-space blur: blur lighting or albedo buffer slightly -> get that soft diffuse look
- Pre-integrated skin shading (UE4): use lookup tables of precomputed subsurface scattering wrt view/light direction -> pre-integrate the math into a 2D texture and sample it
Fake volumetrics
- Volumetric fog
- Particle-based volume
- Fake light shafts (god rays)
- Volumetric ray marching
TODO
Temporal and Spatial Fidelity
- Spatial fidelity: how much detail you show in a single frame
- Temporal fidelity: how stable and smooth the image looks over time
Temporal Anti-Aliasing (TAA)
Smooth data over time
- Main idea
- Render the current frame with jittered sampling
- Jittered sampling: slightly offsetting the camera’s projection matrix each frame by a tiny sub-pixel amount
- Compare with reprojected data from the previous frame (based on motion vectors)
- Motion vector: describes how much each pixel or object has moved between previous and current frames, 2D vector per pixel (horizontal/vertical movement)
- Use motion vector to map data (colour buffer, lighting data, …) from previous frame to data from current one
- Blend the two together
- Cons
- Can cause ghosting: outdated visual data from a previous frame incorrectly blended into the current frame
- Blur thin features and motion
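The reproject-and-blend core of TAA for one pixel grid, with buffers as 2D lists of grey values and integer per-pixel motion vectors for simplicity. Real implementations also clamp the history sample against the current frame's neighbourhood, which is the main defence against the ghosting noted above:

```python
def taa_resolve(current, history, motion, alpha=0.1):
    """TAA resolve sketch: reproject the history buffer using per-pixel
    motion vectors, then blend a small fraction of the current frame into
    it. `motion[y][x]` is an integer (dx, dy) pixel offset."""
    h, w = len(current), len(current[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = motion[y][x]
            hx = min(max(x - dx, 0), w - 1)   # where this pixel was last frame
            hy = min(max(y - dy, 0), h - 1)
            out[y][x] = alpha * current[y][x] + (1 - alpha) * history[hy][hx]
    return out
```

The low `alpha` is why TAA is both stable and prone to ghosting: each frame contributes only a little, so stale history fades out slowly.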
Deep Learning Super Sampling (DLSS)
Render at a lower resolution, then upscale to high resolution with a neural network
- Proprietary NVIDIA model
- Takes as additional inputs motion vectors, depth and jitter
- TODO
FidelityFX Super Resolution (FSR)
AMD’s open upscaling solution: TAA + motion vector based upscaling
- Works on any GPU
- Lower image quality than DLSS
- More prone to ghosting and blurring
- TODO