{"id":1233,"date":"2025-05-21T17:54:43","date_gmt":"2025-05-21T15:54:43","guid":{"rendered":"https:\/\/cammonte.com\/?page_id=1233"},"modified":"2025-05-21T18:30:10","modified_gmt":"2025-05-21T16:30:10","slug":"real-time-vs-offline-rendering","status":"publish","type":"page","link":"https:\/\/cammonte.com\/index.php\/real-time-vs-offline-rendering\/","title":{"rendered":"Real-time vs Offline Rendering"},"content":{"rendered":"\n<p>Techniques used in real-time rendering to simulate accuracy of offline rendering.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Lighting and Global Illumination<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">Baked Lighting<\/h2>\n\n\n\n<p class=\"has-text-align-center\"><strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\">Precompute lighting data for static objects<\/mark><\/strong><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Lightmaps<\/h3>\n\n\n\n<p class=\"has-text-align-center\"><strong>2D texture of light intensity and colour for surface of a static mesh, accounts for direct and indirect lighting<\/strong><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Ambient Occlusion maps<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\">Ambient occlusion<\/mark><\/strong>: how exposed a point in a scene is to ambient lighting (light that comes from surrounding environment rather than direct light source), adds artificial shadows in constricted spaces<\/li>\n\n\n\n<li><strong>Binary map<\/strong>: 0 or 1 for occluded or not<\/li>\n\n\n\n<li><strong>Built through raytracing<\/strong>: cast set of rays around surface normal and count how many are occluded from light sources, attribute value (0 or 1) accordingly at that point<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Shadow maps<\/h3>\n\n\n\n<p class=\"has-text-align-center\"><strong>Static geometry shadows baked into 
lightmaps\/textures<\/strong><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Light probes<\/h3>\n\n\n\n<p class=\"has-text-align-center\"><strong>Light &#8220;sensors&#8221; to store precomputed lighting data at discrete points in space<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Dynamic objects sample nearby light probes and interpolate to get environment lighting info (indirect lighting, AO)<\/li>\n\n\n\n<li>Stored as spherical harmonics or cubemaps\n<ul class=\"wp-block-list\">\n<li><strong>Spherical harmonics<\/strong>: small set of basis-function coefficients compactly encoding low-frequency directional lighting<\/li>\n\n\n\n<li><strong>Cubemaps<\/strong>: six textures capturing incoming light along each axis direction<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>Diffuse lighting only, don&#8217;t react to real-time light changes<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Reflection probes<\/h3>\n\n\n\n<p class=\"has-text-align-center\"><strong>Cubemap of nearby environment<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Render scene from 6 directions around probe at bake time<\/li>\n\n\n\n<li>Shader of reflective objects can sample the probe at runtime<\/li>\n\n\n\n<li>Dynamic reflection probes: re-render cubemap periodically<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Screen-space techniques<\/h2>\n\n\n\n<p class=\"has-text-align-center\"><strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\">Only use what&#8217;s currently available in the rendered frame<\/mark><\/strong><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Screen-space Ambient Occlusion (SSAO)<\/h3>\n\n\n\n<p class=\"has-text-align-center\"><strong>Sample depth buffer around each pixel<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Sample points around each pixel; samples that sit behind the geometry stored in the depth buffer count as occluded, and the occluded fraction darkens the pixel<\/li>\n\n\n\n<li>Adds soft shadows<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Screen-space Reflection (SSR)<\/h3>\n\n\n\n<p class=\"has-text-align-center\"><strong>Use depth and colour buffers to simulate reflection by tracing rays in screen 
space<\/strong><\/p>\n\n\n\n<p><strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-luminous-vivid-orange-color\">TODO<\/mark><\/strong><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Screen-space Global Illumination (SSGI)<\/h3>\n\n\n\n<p class=\"has-text-align-center\"><strong>Simulate indirect lighting (first diffuse light bounce) using only screen info<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Sample nearby pixels within a hemisphere around the normal; bright samples act as indirect light sources, and a depth buffer comparison checks for occlusion<\/li>\n\n\n\n<li>Adds colour bleeding, diffuse bounce, responds to movement<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Real-time Global Illumination<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Lumen (Unreal Engine 5)<\/h3>\n\n\n\n<p class=\"has-text-align-center\"><strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\">Hybrid GI system using screen space, signed distance fields (SDFs) and surface caching<\/mark><\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Screen-space tracing<\/strong>\n<ul class=\"wp-block-list\">\n<li>First bounce is attempted using visible geometry (similar to SSR or SSGI)<\/li>\n\n\n\n<li>If bounce point is visible on screen, fast and accurate<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Distance field tracing<\/strong>\n<ul class=\"wp-block-list\">\n<li>When screen-space tracing fails, fall back to tracing rays through distance fields: simplified 3D volumes representing scene geometry<\/li>\n\n\n\n<li>Captures off-screen and large-scale bounce<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Surface caching<\/strong>\n<ul class=\"wp-block-list\">\n<li>Store cached representation of scene&#8217;s lighting info, sampled from multiple rays over time<\/li>\n\n\n\n<li>Used for lighting surfaces not directly visible on 
screen<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Temporal accumulation<\/strong>\n<ul class=\"wp-block-list\">\n<li>Denoise and stabilise lighting over multiple frames<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Why is it relevant?<\/strong>\n<ul class=\"wp-block-list\">\n<li>No baking required<\/li>\n\n\n\n<li>Supports moving lights<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">DDGI (Dynamic Diffuse Global Illumination)<\/h3>\n\n\n\n<p><strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-luminous-vivid-orange-color\">TODO<\/mark><\/strong><\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Selective Raytracing Passes<\/h1>\n\n\n\n<p class=\"has-text-align-center\"><strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\">Selective ray tracing passes (for shadows, reflections, indirect lighting), often leveraging RTX<\/mark><\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">RTX<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>NVIDIA\u2019s real-time ray tracing technology\n<ul class=\"wp-block-list\">\n<li><strong>Hardware<\/strong>: NVIDIA RTX GPUs<\/li>\n\n\n\n<li><strong>Software<\/strong>: NVIDIA RTX, DLSS, Reflex<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>RT cores: dedicated hardware to accelerate ray traversal and BVH (bounding volume hierarchy) intersection tests<\/li>\n<\/ul>\n\n\n\n<p><strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-luminous-vivid-orange-color\">Look more into this: RTXGI<\/mark><\/strong><\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Level of Detail and Asset Complexity<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">LOD Systems<\/h2>\n\n\n\n<p class=\"has-text-align-center\"><strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\">Dynamically swap mesh or material versions based on distance to the camera<\/mark><\/strong><\/p>\n\n\n\n<ul 
class=\"wp-block-list\">\n<li>Create multiple versions of a mesh (LOD0, LOD1, LOD2, etc.) with progressively fewer polygons<\/li>\n\n\n\n<li>Engine switches at runtime based on distance from camera, screen size of the object and performance budget<\/li>\n\n\n\n<li>Can also apply to materials, textures, rigs, &#8230;<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Culling<\/h2>\n\n\n\n<p class=\"has-text-align-center\"><strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\">Skip rendering objects that are outside of X<\/mark><\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Frustum Culling<\/strong>: skip objects outside the camera view<\/li>\n\n\n\n<li><strong>Occlusion Culling<\/strong>: skip objects behind other objects<\/li>\n\n\n\n<li><strong>Distance Culling<\/strong>: skip objects beyond a certain distance<\/li>\n\n\n\n<li><strong>Portal Culling<\/strong>: used in interiors with rooms\/zones<\/li>\n<\/ul>\n\n\n\n<h1 class=\"wp-block-heading\">Textures and Materials<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">Material optimisation<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Shader LOD<\/strong>: simpler shader versions for distant objects<\/li>\n\n\n\n<li><strong>Material instancing<\/strong>: reuse same shader with different parameters<\/li>\n\n\n\n<li><strong>Packed textures<\/strong>: store several different non-color maps (roughness, metallic, AO) into the R, G, B and A channels of a single image -> sample one RGBA texture instead of 3 separate single channel ones -> fewer texture lookups and lower memory usage<\/li>\n\n\n\n<li><strong>Mipmapping<\/strong>: user lower-res texture versions for distant surfaces<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Fake Subsurface Scattering<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Translucency map<\/strong>: use a mask that brightens thin areas, multiply by back lighting<\/li>\n\n\n\n<li><strong>Screen-space 
blur<\/strong>: blur lighting or albedo buffer slightly -> get that soft diffuse look<\/li>\n\n\n\n<li><strong>Pre-integrated skin shading (UE4)<\/strong>: use lookup tables of precomputed subsurface scattering wrt view\/light direction -> pre-integrate the math into a 2D texture and sample it<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Fake volumetrics<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Volumetric fog<\/li>\n\n\n\n<li>Particle-based volume<\/li>\n\n\n\n<li>Fake light shafts (god rays)<\/li>\n\n\n\n<li>Volumetric ray marching<\/li>\n<\/ul>\n\n\n\n<p><strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-luminous-vivid-orange-color\">TODO<\/mark><\/strong><\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Temporal and Spatial Fidelity<\/h1>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Spatial fidelity<\/strong>: how much detail you show in a single frame<\/li>\n\n\n\n<li><strong>Temporal fidelity<\/strong>: how stable and smooth the image looks over time<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Temporal Anti-Aliasing (TAA)<\/h2>\n\n\n\n<p class=\"has-text-align-center\"><strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\">Smooth data over time<\/mark><\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Main idea<\/strong>\n<ol class=\"wp-block-list\">\n<li>Render the current frame with <strong>jittered sampling<\/strong>\n<ul class=\"wp-block-list\">\n<li><strong>Jittered sampling<\/strong>: slightly offsetting the camera\u2019s projection matrix each frame by a tiny sub-pixel amount<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>Compare with reprojected data from the previous frame (based on <strong>motion vectors<\/strong>)\n<ul class=\"wp-block-list\">\n<li><strong>Motion vector<\/strong>: describes how much each pixel or object has moved between previous and current frames, 2D vector per pixel (horizontal\/vertical movement)<\/li>\n\n\n\n<li>Use motion 
vector to map data (colour buffer, lighting data, &#8230;) from previous frame to data from current one<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>Blend the two together<\/li>\n<\/ol>\n<\/li>\n\n\n\n<li><strong>Cons<\/strong>\n<ul class=\"wp-block-list\">\n<li>Can cause <strong>ghosting<\/strong>: outdated visual data from a previous frame incorrectly blended into the current frame<\/li>\n\n\n\n<li>Can blur thin features and fast motion<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Deep Learning Super Sampling (DLSS)<\/h2>\n\n\n\n<p class=\"has-text-align-center\"><strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\">Render at a lower resolution, then upscale to high-res with a neural network<\/mark><\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Proprietary NVIDIA model<\/li>\n\n\n\n<li>Takes as additional inputs motion vectors, depth and jitter offsets<\/li>\n\n\n\n<li><strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-luminous-vivid-orange-color\">TODO<\/mark><\/strong><\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">FidelityFX Super Resolution (FSR)<\/h2>\n\n\n\n<p class=\"has-text-align-center\"><strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\">AMD&#8217;s open upscaling solution: TAA + motion-vector-based upscaling<\/mark><\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Works on any GPU<\/li>\n\n\n\n<li>Lower image quality than DLSS<\/li>\n\n\n\n<li>More prone to ghosting and blurring<\/li>\n\n\n\n<li><strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-luminous-vivid-orange-color\">TODO<\/mark><\/strong><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Techniques used in real-time rendering to simulate accuracy of offline rendering. 
Lighting and Global Illumination Baked Lighting Precompute lighting data for static objects Lightmaps 2D texture of light intensity and colour for surface of a static mesh, accounts for direct and indirect lighting Ambient Occlusion maps Shadow maps Static geometry shadows baked into lightmaps\/textures Light [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"ub_ctt_via":"","site-container-style":"default","site-container-layout":"default","site-sidebar-layout":"default","disable-article-header":"default","disable-site-header":"default","disable-site-footer":"default","disable-content-area-spacing":"default","footnotes":""},"class_list":["post-1233","page","type-page","status-publish","hentry"],"featured_image_src":null,"jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/cammonte.com\/index.php\/wp-json\/wp\/v2\/pages\/1233","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cammonte.com\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/cammonte.com\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/cammonte.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/cammonte.com\/index.php\/wp-json\/wp\/v2\/comments?post=1233"}],"version-history":[{"count":5,"href":"https:\/\/cammonte.com\/index.php\/wp-json\/wp\/v2\/pages\/1233\/revisions"}],"predecessor-version":[{"id":1239,"href":"https:\/\/cammonte.com\/index.php\/wp-json\/wp\/v2\/pages\/1233\/revisions\/1239"}],"wp:attachment":[{"href":"https:\/\/cammonte.com\/index.php\/wp-json\/wp\/v2\/media?parent=1233"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}