Depth Annotations: Designing Depth of a Single Image for Depth-based Effects
We present a novel pipeline to generate a depth map from a single image that can serve as input for a variety of artistic depth-based effects. In this context, the depth map does not have to be perfect; rather, it is designed with respect to a desired result. Consequently, our solution centers on user interaction and relies on scribble-based depth editing. The annotations can be sparse, as the depth map is generated by a diffusion process guided by image features. Additionally, we support a variety of controls, such as a non-linear depth mapping, a steering mechanism for the diffusion (e.g., directionality, emphasis, or reduction of the influence of image cues), and, besides absolute, also relative depth indications. If a depth estimate from an automatic solution is available, we illustrate how this information can be integrated in the form of a depth palette, which allows the user to transfer depth values via a painting metaphor. We demonstrate a variety of artistic 3D results, including wiggle stereoscopy, artistic abstractions, haze, unsharp masking, and depth of field.
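The core idea of propagating sparse scribbles with an image-guided diffusion can be sketched as below. This is a minimal illustrative implementation, not the authors' actual method: the function name, the 4-neighbour Jacobi iteration, and the exponential edge-stopping weight (parameter `beta`) are assumptions chosen to show the principle that depth flows freely inside smooth regions but is blocked at strong image edges.

```python
import numpy as np

def propagate_depth(image, scribble, mask, beta=20.0, iters=200):
    """Edge-aware diffusion of sparse depth scribbles (illustrative sketch).

    image    : HxW grayscale array in [0, 1], supplies the guiding image cues
    scribble : HxW array holding the user's depth annotations
    mask     : HxW boolean array, True where a scribble constrains the depth
    beta     : edge sensitivity; larger values block diffusion across edges
    """
    depth = scribble.astype(float).copy()
    for _ in range(iters):
        num = np.zeros_like(depth)
        den = np.zeros_like(depth)
        # Weighted average over the 4-neighbourhood; weights fall off
        # exponentially with local image contrast, so strong edges act
        # as diffusion barriers. np.roll wraps at the borders, which is
        # acceptable for this sketch.
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            shifted = np.roll(depth, (dy, dx), axis=(0, 1))
            grad = np.roll(image, (dy, dx), axis=(0, 1)) - image
            w = np.exp(-beta * np.abs(grad))
            num += w * shifted
            den += w
        depth = num / den
        # Re-impose the user annotations as hard constraints each step.
        depth[mask] = scribble[mask]
    return depth
```

The steering controls mentioned in the abstract (directionality, emphasis, or reduction of image cues) would correspond to modulating the per-neighbour weights `w` before the averaging step.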
BibTeX reference

@Article{LSE17a,
  author  = "Liao, Jingtang and Shen, Shuheng and Eisemann, Elmar",
  title   = "Depth Annotations: Designing Depth of a Single Image for Depth-based Effects",
  journal = "Computers \& Graphics",
  year    = "2017",
  url     = "http://graphics.tudelft.nl/Publications-new/2017/LSE17a"
}