You are cordially invited to attend our Computer Graphics and Visualization Seminar on Thursday, October 4, 2018, 15:45-17:45h, at Pulse-Hall 4.
The program features the following two talks:
Title: Occlusion culling in memory-coherent ray tracing
In this project I aim to improve the performance of out-of-core ray/path tracing based on memory-coherent ray tracing. In memory-coherent ray tracing, the acceleration structure is split into two layers: the first is always resident in memory, while the subtrees in the second layer are evicted from memory when necessary. Rays are batched at unloaded leaf nodes (in the top-level tree), and only when a batch is full is the leaf node loaded from disk and intersected. My research question is whether rendering performance can be improved by keeping a low-resolution representation of each top-level leaf node in memory at all times and using it as an early-out test for rays hitting a leaf’s bounding volume. This should reduce the number of disk operations at the cost of some extra computation time.
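The batching-with-early-out idea above can be sketched as follows. This is a minimal, hypothetical illustration only: the names (`TopLevelLeaf`, the cell-set "proxy") and the representation of rays as sets of coarse cells are assumptions for the sketch, not the project's actual data structures. A ray whose coarse cells miss the in-memory proxy is rejected immediately, so it never joins a batch and never triggers a disk load.

```python
# Hypothetical sketch: each top-level leaf keeps a coarse in-memory proxy
# (here, a set of occupied cell indices). Rays that miss the proxy are
# culled before batching, reducing how often the subtree is loaded from disk.

BATCH_SIZE = 3  # tiny for illustration; real batches are far larger

class TopLevelLeaf:
    def __init__(self, occupied_cells):
        self.proxy = set(occupied_cells)  # low-res stand-in, always in memory
        self.batch = []                   # rays waiting for the subtree
        self.disk_loads = 0               # count of expensive disk operations

    def enqueue(self, ray_cells):
        """ray_cells: coarse cells the ray traverses inside this leaf's bounds."""
        if self.proxy.isdisjoint(ray_cells):
            return  # early-out: the ray cannot hit the subtree's geometry
        self.batch.append(ray_cells)
        if len(self.batch) >= BATCH_SIZE:
            self.disk_loads += 1  # load the subtree from disk once per full batch
            self.batch.clear()    # ...then intersect all batched rays against it

leaf = TopLevelLeaf(occupied_cells={0, 5})
for cells in [{0}, {9}, {5}, {7}, {0, 5}]:  # rays touching only {9} or {7} are culled
    leaf.enqueue(cells)
print(leaf.disk_loads, len(leaf.batch))  # -> 1 0 (one flushed batch, none pending)
```

Without the proxy test, all five rays would be batched, forcing a second disk load; the trade-off is the extra proxy intersection work per ray.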
Title: Precomputed Light-Transport Networks for Volume Rendering
Rendering volumetric data with complex lighting phenomena is a difficult task.
Previous solutions, such as Exposure Render, rely on a Monte Carlo process that has to shoot many rays in order to approximate the light transport faithfully. Consequently, the process is costly and efficient image synthesis becomes challenging. In this project, we want to investigate the principle of path reuse by building a network of light-transport paths in a preprocess. In this way, we avoid the costly process of establishing new branches for each ray that traverses the volume. This will be a kick-off talk, in which we describe the goals we will pursue in the months to come.
Our initial plan is as follows. Given a volumetric data set, we want to precompute the result of a set of rays within this volume, steered by the volume data itself via mechanisms such as importance sampling. These rays will be connected to form a light-transport network. Our goal is to use this network to accelerate the computation of an approximate light transport at run-time. When rendering, we launch rays from the light/camera and connect them to the precomputed network. The energies carried by these rays are then simply propagated along the paths of the network to estimate the overall light contribution. In this way, only a few new intersection tests need to be performed, while many paths in the network are reused. This reduces run-time costs drastically with, hopefully, little visual impact. Several research questions remain to be answered: How can the network be represented and stored efficiently? How can it be derived quickly? How should the computations be structured? How can an unbiased result be obtained?