Depth-of-field rendering with multiview synthesis
We present a GPU-based real-time rendering method that simulates high-quality depth-of-field effects, similar in quality to multiview-accumulation methods. Most real-time approaches have difficulty obtaining good approximations of visibility and view-dependent shading because they rely on a single view image. Our method also avoids rendering the scene multiple times, but it can approximate different views by relying on a layered image-based scene representation. We present several performance and quality improvements, such as early culling, approximate cone tracing, and jittered sampling. Our method achieves artifact-free results for complex scenes and reasonable depth-of-field blur in real time.
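For context (this is not part of the paper), depth-of-field methods of this kind build on the thin-lens camera model: a scene point at depth z projects to a circle of confusion whose diameter sets its blur, and accumulation-style references average many jittered views taken across the lens aperture. The C++ sketch below illustrates these two standard ingredients under that model; the names cocDiameter and jitteredLensSample and all parameter choices are illustrative assumptions, not the paper's actual interface.

#include <cmath>
#include <cstdio>
#include <initializer_list>
#include <random>

const double kPi = 3.14159265358979323846;

// Circle-of-confusion diameter (metres) for a point at distance z,
// given aperture diameter 'aperture', focal length 'focalLen', and
// focus distance 'focusDist', per the standard thin-lens model.
double cocDiameter(double z, double aperture, double focalLen, double focusDist) {
    double magnification = focalLen / (focusDist - focalLen);
    return aperture * magnification * std::fabs(z - focusDist) / z;
}

// One jittered sample on the lens aperture: stratum (i, j) of an
// n x n grid mapped to the aperture disk via a polar mapping, with
// uniform jitter inside the stratum. Accumulation-style DoF averages
// one rendered view per such lens offset.
void jitteredLensSample(int i, int j, int n, double apertureRadius,
                        std::mt19937& rng, double& dx, double& dy) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double r = apertureRadius * std::sqrt((i + u(rng)) / n);
    double theta = 2.0 * kPi * (j + u(rng)) / n;
    dx = r * std::cos(theta);
    dy = r * std::sin(theta);
}

int main() {
    // Hypothetical example: 50 mm lens at f/2 (25 mm aperture), focused at 2 m.
    const double f = 0.050, A = f / 2.0, S = 2.0;
    for (double z : {0.5, 1.0, 2.0, 4.0, 8.0})
        std::printf("z = %.1f m -> CoC = %.2f mm\n", z, cocDiameter(z, A, f, S) * 1000.0);

    // One jittered lens offset (aperture radius is half the diameter A).
    std::mt19937 rng(42);
    double dx, dy;
    jitteredLensSample(0, 0, 4, A * 0.5, rng, dx, dy);
    std::printf("lens offset: (%.4f, %.4f) m\n", dx, dy);
}

Note that the point in focus (z = 2 m) yields a zero-diameter circle of confusion, while blur grows in front of and behind the focal plane; the paper's contribution is to approximate the many jittered views cheaply from a layered image-based representation rather than re-rendering the scene per sample.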
BibTeX reference
@Article{LES09a,
  author  = "Lee, Sungkil and Eisemann, Elmar and Seidel, Hans-Peter",
  title   = "Depth-of-field rendering with multiview synthesis",
  journal = "ACM Trans. Graph. (Proc. of SIGGRAPH Asia)",
  volume  = "28",
  number  = "5",
  year    = "2009",
  url     = "http://graphics.tudelft.nl/Publications-new/2009/LES09a"
}