You are cordially invited to attend our Computer Graphics and Visualization Seminar on Thursday,
April 4, 2019, 15:45-17:45h, at Pulse Hall 7.
The program features the following three speakers:
Ruben VroegindeWeij (PRELIMINARY) Title: Motion in Image Abstract: We present a method to encode motion in a single image by mixing frames from different time origins with a simple user interface.
Mark van de Ruit Title: Pre-Estimated Spectral Rendering Abstract: Spectral Monte-Carlo rendering algorithms are suited for reproducing several advanced light phenomena such as dispersion and colored particle scattering. However, spectral rendering comes at the cost of increased (colored) image noise, as now additional samples are required in the spectral domain. We propose to iteratively build estimates of the spectral distributions in a scene during rendering, and use these estimates to guide sampling of the spectral domain. This method can lower variance from spectral sampling in specific situations, which is demonstrated with a working implementation in a conventional path tracer.
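The guiding idea of the talk can be sketched as importance sampling of wavelength bins from an accumulated spectral estimate, with a 1/pdf weight keeping the estimator unbiased (the 16-bin spectrum, sample counts, and all names below are made-up illustrations, not the talk's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spectral radiance over 16 wavelength bins.
spectrum = np.abs(np.sin(np.linspace(0.0, 3.0, 16))) + 0.05
total = spectrum.sum()

def mc_total(pdf, n=4096):
    """Estimate the spectrum's integral; the 1/pdf weight keeps it unbiased."""
    bins = rng.choice(len(spectrum), size=n, p=pdf)
    return float(np.mean(spectrum[bins] / pdf[bins]))

uniform_pdf = np.full(16, 1.0 / 16.0)
guided_pdf = spectrum / total     # pdf shaped like the accumulated estimate

# With the pdf matched to the integrand, every sample contributes exactly
# `total`, so the guided estimator has (near) zero variance here.
print(mc_total(uniform_pdf), mc_total(guided_pdf), total)
```

In a renderer the estimate is only approximate and built up over iterations, so the variance reduction is partial rather than total, but the same weighting keeps the result unbiased.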
Felix Yang Title: Adaptive Multi-view Ambient Occlusion Abstract: Screen-space ambient occlusion and obscurance techniques approximate the ambient occlusion or obscurance lighting model in screen space. They are ubiquitously adopted in modern video games, but suffer from view-dependent artifacts. One possible remedy is to use additional auxiliary cameras to aid the computation, but the improvement diminishes if the auxiliary cameras have poor coverage of the main scene. This project aims to develop techniques that adaptively manipulate the auxiliary cameras to ensure good coverage, and therefore a more stable improvement over the single-view result.
You are all cordially invited to the workshop on visual analytics and applications, which will take place on the 8th of April, 14:00-17:00, at Pulse-Hall 4. Please find the detailed program below.
14:00 Jean Daniel Fekete: “Exploring the Evolution of Relationships with Dynamic Hypergraphs”
14:40 Renata Raidou: “Employing Visual Analytics for the Exploration and Prediction”
15:30 Cagatay Turkay: “Informed computational modelling through visual analytics”
Abstract: With the increasing availability of computational data analysis and modelling tools that can be utilised out-of-the-box, the route from data to results is now much shorter. However, these advancements also come with their own limitations, and data scientists need to be aware of the pitfalls and act carefully, questioning every observation and method used within each step of the data analysis process. Visual analytics approaches, where interactive visualisations are coupled tightly with algorithms, offer effective methodologies for conducting data science in such an inquisitive, rigorous way. This talk will discuss how visual analytics can facilitate such practices and will look at examples of research on how data can be transformed and visualised creatively from multiple perspectives, how comparisons can be made between different models, parameters, and local and global solutions, and how interaction is an enabler for such processes.
16:10 Thomas Hollt: “Cytosplore: Visual Analytics for Single-Cell Profiling of the Immune System” Abstract: Recent advances in single-cell acquisition technology have led to a shift towards single-cell analysis in many fields of biology. In immunology, detailed knowledge of the cellular composition is of interest, as it can be the cause of deregulated immune responses, which cause diseases. Similarly, vaccination is based on triggering proper immune responses; however, many vaccines are ineffective or only work properly in a subset of those who are vaccinated. Identifying differences in the cellular composition of the immune system in such cases can lead to more precise treatment. Cytosplore is an integrated, interactive visual analysis framework for the exploration of large single-cell datasets. We have developed Cytosplore in close collaboration with immunology researchers and several partners use the software in their daily workflow. Cytosplore enables efficient data analysis and has led to several discoveries alongside high-impact publications.
Professor Eisemann is to be awarded the 2019 Dutch prize for ICT research (Nederlandse Prijs voor ICT-onderzoek). The well-deserved award is a recognition for Prof. Eisemann’s research into the accurate, detailed depiction of visualizations using modern graphics hardware. The prize will be officially awarded to Prof. Eisemann during the ICT Open 2019 in Hilversum. Congratulations to Prof. Eisemann!
You are cordially invited to attend our Computer Graphics and Visualization Seminar on Thursday, March 7, 2019, 15:45-17:45h, at EWI-Lecture hall Chip.
The program features the following two speakers:
Remi van der Laan
Title: Exploiting Coherence in Time-Varying Voxel Data
Abstract: We encode time-varying voxel data for efficient storage and streaming. We store the equivalent of a separate sparse voxel octree for each frame, but utilize both spatial and temporal coherence to reduce the amount of memory needed. We represent the time-varying voxel data in a single directed acyclic graph with one root per time step. In this graph, we avoid storing identical regions by keeping one unique instance and pointing to that from several parents. We further reduce the memory consumption of the graph by minimizing the number of bits per pointer and encoding the result into a dense bitstream.
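The deduplication behind this representation can be illustrated with a toy memoization pass (a hypothetical 1D binary tree over "voxel" bits rather than a real sparse voxel octree; the numbers below are for this toy only):

```python
def build_dag(bits, unique, lo, hi):
    """Return a node id for bits[lo:hi], sharing ids between identical subtrees."""
    if hi - lo == 1:
        key = ("leaf", bits[lo])
    else:
        mid = (lo + hi) // 2
        key = (build_dag(bits, unique, lo, mid),
               build_dag(bits, unique, mid, hi))
    # The dict is the dedup table: an existing subtree keeps its old id.
    return unique.setdefault(key, len(unique))

unique = {}
bits = [0, 1] * 32                               # one highly coherent 64-"voxel" frame
root_t0 = build_dag(bits, unique, 0, len(bits))
root_t1 = build_dag(bits, unique, 0, len(bits))  # an identical next time step
# A full binary tree over 64 leaves needs 127 nodes; the shared DAG needs 8,
# and the second time step adds no nodes at all (root_t0 == root_t1).
print(len(unique))  # → 8
```

Keeping one root id per time step over a single shared node table is the same mechanism that lets the paper exploit both spatial and temporal coherence.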
Michiel van Spaendonck
Title: Spatiotemporal Variance-Guided Filtering: Real-Time Reconstruction for Path-Traced Global Illumination
Abstract: We introduce a reconstruction algorithm that generates a temporally stable sequence of images from one-path-per-pixel global illumination. To handle such noisy input, we use temporal accumulation to increase the effective sample count and spatiotemporal luminance variance estimates to drive a hierarchical, image-space wavelet filter. This hierarchy allows us to distinguish between noise and detail at multiple scales using local luminance variance.
Physically based light transport is a long-standing goal for real-time computer graphics. While modern games use limited forms of ray tracing, physically based Monte Carlo global illumination does not meet their 30 Hz minimal performance requirement. Looking ahead to fully dynamic real-time path tracing, we expect this to only be feasible using a small number of paths per pixel. As such, image reconstruction using low sample counts is key to bringing path tracing to real time. When compared to prior interactive reconstruction filters, our work gives approximately 10× more temporally stable results, matches reference images 5-47% better (according to SSIM), and runs in just 10 ms (±15%) on modern graphics hardware at 1920×1080 resolution.
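The temporal side of the algorithm can be sketched as exponentially weighted moving averages of a pixel's color and its second moment (a single-pixel toy without the reprojection, geometry tests, and wavelet filter of the actual method; all constants are made up):

```python
import numpy as np

rng = np.random.default_rng(42)
truth, noise_std, alpha = 1.0, 0.5, 0.2    # illustrative constants

mean = m2 = None
for _ in range(200):
    sample = truth + rng.normal(0.0, noise_std)   # a 1-spp noisy radiance value
    if mean is None:
        mean, m2 = sample, sample * sample        # first frame: no history yet
    else:
        # Exponentially weighted moving averages of the first two moments.
        mean = alpha * sample + (1.0 - alpha) * mean
        m2 = alpha * sample * sample + (1.0 - alpha) * m2

variance = max(m2 - mean * mean, 0.0)   # luminance variance estimate
print(mean, variance)
```

In the full method this per-pixel variance estimate drives the bandwidth of the hierarchical wavelet filter, so noisy pixels are filtered more aggressively than converged ones.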
You are cordially invited to attend our Computer Graphics and Visualization Seminar on Thursday, February 21, 2019, 15:45-17:45h, at EWI-Lecture hall Chip.
The program features the following two speakers:
Nouri Khalass Title: Visualizing Stars and Emission Nebulae Abstract: We describe the star and nebula visualization techniques used to create a 3D volumetric visualization of the Orion Nebula. The nebula’s ionization layer is modeled first as a surface model, derived from infrared and visible light observations. The model is imported into a volume scene graph-based volume visualization system to simulate the nebula’s emissive gases. Stars are rendered using Gaussian spots that are attenuated with distance.
Mark van de Ruit Title: Real-Time Polygonal-Light Shading with Linearly Transformed Cosines Abstract: In this paper, we show that applying a linear transformation (represented by a 3×3 matrix) to the direction vectors of a spherical distribution yields another spherical distribution, for which we derive a closed-form expression. With this idea, we can use any spherical distribution as a base shape to create a new family of spherical distributions with parametric roughness, elliptic anisotropy and skewness. If the original distribution has an analytic expression, normalization, integration over spherical polygons, and importance sampling, then these properties are inherited by the linearly transformed distributions.
By choosing a clamped cosine for the original distribution we obtain a family of distributions, which we call Linearly Transformed Cosines (LTCs), that provide a good approximation to physically based BRDFs and that can be analytically integrated over arbitrary spherical polygons. We show how to use these properties in a real-time polygonal-light shading application. Our technique is robust, fast, accurate and simple to implement.
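The core construction can be checked numerically: pull each direction back through M⁻¹, evaluate the clamped-cosine density, and apply the change-of-variables Jacobian |det M⁻¹| / ‖M⁻¹w‖³; the transformed distribution should still integrate to one over the sphere (the example matrix and the quadrature resolution below are arbitrary choices for this check):

```python
import numpy as np

M = np.diag([0.5, 2.0, 1.0])          # example transform: elliptic anisotropy
M_inv = np.linalg.inv(M)
det_M_inv = np.linalg.det(M_inv)

# Midpoint-rule quadrature grid over the sphere.
nt, nph = 400, 800
theta = (np.arange(nt) + 0.5) * np.pi / nt
phi = (np.arange(nph) + 0.5) * 2.0 * np.pi / nph
t, p = np.meshgrid(theta, phi, indexing="ij")
w = np.stack([np.sin(t) * np.cos(p),
              np.sin(t) * np.sin(p),
              np.cos(t)], axis=-1)

# Pull directions back through M^-1 and apply the Jacobian of the transform
# to the normalized clamped-cosine density max(z, 0) / pi.
w0 = w @ M_inv.T
norm = np.linalg.norm(w0, axis=-1)
d0 = np.maximum(w0[..., 2] / norm, 0.0) / np.pi
density = d0 * abs(det_M_inv) / norm**3

integral = np.sum(density * np.sin(t)) * (np.pi / nt) * (2.0 * np.pi / nph)
print(integral)  # close to 1: the transformed distribution stays normalized
```

This inherited normalization (together with inherited polygon integration and importance sampling) is exactly what makes the family practical for shading.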
You are cordially invited to attend our Computer Graphics and Visualization Seminar on Thursday, February 7, 2019, 15:45-17:45h, at EWI-Lecture hall Chip.
The program features the following two speakers:
Haoming Yeh Title: Projective Dynamics: Fusing Constraint Projections for Fast Simulation Abstract: We present a new method for implicit time integration of physical systems. Our approach builds a bridge between nodal Finite Element methods and Position Based Dynamics, leading to a simple, efficient, robust, yet accurate solver that supports many different types of constraints. We propose specially designed energy potentials that can be solved efficiently using an alternating optimization approach. Inspired by continuum mechanics, we derive a set of continuum-based potentials that can be efficiently incorporated within our solver. We demonstrate the generality and robustness of our approach in many different applications, ranging from the simulation of solids, cloths, and shells, to example-based simulation. Comparisons to Newton-based and Position Based Dynamics solvers highlight the benefits of our formulation.
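The alternating optimization at the heart of such solvers can be sketched on a hypothetical 1D chain with unit rest-length springs (a static toy, not the paper's full dynamic solver): a local step projects each edge onto its constraint set, and a global step solves one linear system whose matrix never changes and could be factored once.

```python
import numpy as np

n, rest, kappa = 5, 1.0, 1e6
x = np.array([0.0, 0.3, 0.35, 2.2, 2.5])      # deformed chain, node 0 pinned

# Edge-difference operator and the constant global matrix (factorable once).
G = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
A = G.T @ G
A[0, 0] += kappa                               # soft pin for node 0 at x = 0

for _ in range(10):
    # Local step: project every edge onto its rest-length constraint set.
    d = np.where(G @ x >= 0.0, rest, -rest)
    # Global step: best node positions given the projected edge targets.
    b = G.T @ d
    b[0] += kappa * 0.0                        # pinned target position of node 0
    x = np.linalg.solve(A, b)

print(np.round(np.diff(x), 3))  # every edge returns to rest length 1
```

The appeal of the split is visible even in this toy: the nonlinearity lives entirely in the cheap local projections, while the global solve stays linear with a fixed matrix.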
Matthijs Amesz Title: Inverse Diffusion Curves using Shape Optimization Abstract: The inverse diffusion curve problem focuses on the automatic creation of diffusion curve images that resemble user-provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on the resulting color fields via a partial differential equation (PDE). We introduce a new approach, complementary to previous methods, that optimizes curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in
You are cordially invited to attend our Computer Graphics and Visualization Seminar on Thursday, January 10,
2019, 15:45-17:45h, at Pulse-Hall 4.
The program features the following two speakers:
Title: Optimizing BRDF Orientations for the Manipulation of Anisotropic Highlights
Abstract: This paper introduces a system for the direct editing of highlights produced by anisotropic BRDFs, which we call anisotropic highlights. We first provide a comprehensive analysis of the link between the direction of anisotropy and the shape of highlight curves for arbitrary object surfaces. The gained insights provide the required ingredients to infer BRDF orientations from a prescribed highlight tangent field. This amounts to a non-linear optimization problem, which is solved at interactive frame rates during manipulation. Taking inspiration from sculpting software, we provide tools that give the impression of manipulating highlight curves while actually modifying their tangents. Our solver produces desired highlight shapes for a host of lighting environments and anisotropic BRDFs.
Title: The Heat Method for Distance Computation
Abstract:
We introduce the heat method for solving the single- or multiple-source shortest path problem on both flat and curved domains. A key insight is that distance computation can be split into two stages: first find the direction along which distance is increasing, then compute the distance itself. The heat method is robust, efficient, and simple to implement since it is based on solving a pair of standard sparse linear systems. These systems can be factored once and subsequently solved in near-linear time, substantially reducing amortized cost. Real-world performance is an order of magnitude faster than state-of-the-art methods, while maintaining a comparable level of accuracy. The method can be applied in any dimension, and on any domain that admits a gradient and inner product—including regular grids, triangle meshes, and point clouds. Numerical evidence indicates that the method converges to the exact distance in the limit of refinement; we also explore smoothed approximations of distance suitable for applications where greater regularity is desired.
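The two-stage split can be demonstrated on a regular 1D grid (an assumed simplification; the method's real setting is meshes and point clouds): diffuse heat from the source for a short time, keep only the normalized gradient direction, then solve a Poisson-type least-squares problem for the distance itself.

```python
import numpy as np

n, h, src = 64, 0.02, 20
x = np.arange(n) * h

# Discrete gradient (node values -> edge slopes) and Neumann Laplacian.
G = (np.eye(n - 1, n, k=1) - np.eye(n - 1, n)) / h
L = -G.T @ G

# Stage 1: diffuse heat from the source for a short time t ~ h^2.
t = h * h
delta = np.zeros(n)
delta[src] = 1.0
u = np.linalg.solve(np.eye(n) - t * L, delta)

# Stage 2: discard magnitudes, keep only the direction of increasing distance.
X = -np.sign(G @ u)          # unit field pointing away from the source

# Stage 3: recover the scalar field whose gradient best matches X
# (least squares: G^T G phi = G^T X), pinning phi at the source.
A, b = G.T @ G, G.T @ X
A[src, :] = 0.0
A[src, src] = 1.0
b[src] = 0.0
phi = np.linalg.solve(A, b)

print(np.max(np.abs(phi - np.abs(x - x[src]))))  # tiny: phi is the distance
```

Both solves involve the same fixed sparse operators, which is why the systems can be prefactored and amortized across many source sets, as the abstract notes.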
You are cordially invited to attend our Computer Graphics and Visualization Seminar on Thursday, December 13th, 2018, 15:45-17:45h, at Pulse-Hall 2.
The program features the following two speakers:
Title: Soccer on Your Tabletop
Abstract: We present a system that transforms a monocular video of a soccer game into a moving 3D reconstruction, in which the players and field can be rendered interactively with a 3D viewer or through an Augmented Reality device. At the heart of our paper is an approach to estimate the depth map of each player, using a CNN that is trained on 3D player data extracted from soccer video games. We compare with state-of-the-art body pose and depth estimation techniques, and show results on both synthetic ground truth benchmarks and real YouTube soccer footage.
Levi van Aanholt
Title: Interactive Sketching of Urban Procedural Models
Abstract: 3D modeling remains a notoriously difficult task for novices despite significant research effort to provide intuitive and automated systems. We tackle this problem by combining the strengths of two popular domains: sketch-based modeling and procedural modeling. On the one hand, sketch-based modeling exploits our ability to draw but requires detailed, unambiguous drawings to achieve complex models. On the other hand, procedural modeling automates the creation of precise and detailed geometry but requires the tedious definition and parameterization of procedural models. Our system uses a collection of simple procedural grammars, called snippets, as building blocks to turn sketches into realistic 3D models. We use a machine learning approach to solve the inverse problem of finding the procedural model that best explains a user sketch. We use non-photorealistic rendering to generate artificial data for training convolutional neural networks capable of quickly recognizing the procedural rule intended by a sketch and estimating its parameters. We integrate our algorithm in a coarse-to-fine urban modeling system that allows users to create rich buildings by successively sketching the building mass, roof, facades, windows, and ornaments. A user study shows that by using our approach non-expert users can generate complex buildings in just a few minutes.
You are cordially invited to attend our Computer Graphics and Visualization Seminar on Thursday, November 29th, 2018, 15:45-17:45h, at Pulse-Hall 2.
The program features the following two speakers:
Jasper van Esveld
Title: CoreCavity: Interactive Shell Decomposition for Fabrication with Two-Piece Rigid Molds
Abstract:
Molding is a popular mass production method, in which the initial expenses for the mold are offset by the low per-unit production cost. However, the physical fabrication constraints of the molding technique commonly restrict the shape of moldable objects. For a complex shape, a decomposition of the object into moldable parts is a common strategy to address these constraints, with plastic model kits being a popular and illustrative example. However, conducting such a decomposition requires considerable expertise, and it depends on the technical aspects of the fabrication technique, as well as aesthetic considerations. We present an interactive technique to create such decompositions for two-piece molding, in which each part of the object is cast between two rigid mold pieces. Given the surface description of an object, we decompose its thin-shell equivalent into moldable parts by first performing a coarse decomposition and then utilizing an active contour model for the boundaries between individual parts. Formulated as an optimization problem, the movement of the contours is guided by an energy reflecting fabrication constraints to ensure the moldability of each part. Simultaneously the user is provided with editing capabilities to enforce aesthetic guidelines. Our interactive interface provides control of the contour positions by allowing, for example, the alignment of part boundaries with object features. Our technique enables a novel workflow, as it empowers novice users to explore the design space, and it generates fabrication-ready two-piece molds that can be used either for casting or industrial injection molding of free-form objects.
Bartosz Zablocki
Title: High-Quality Streamable Free-Viewpoint Video
Abstract:
We present the first end-to-end solution to create high-quality free-viewpoint video encoded as a compact data stream. Our system records performances using a dense set of RGB and IR video cameras, generates dynamic textured surfaces, and compresses these to a streamable 3D video format. Four technical advances contribute to high fidelity and robustness: multimodal multi-view stereo fusing RGB, IR, and silhouette information; adaptive meshing guided by automatic detection of perceptually salient areas; mesh tracking to create temporally coherent subsequences; and encoding of tracked textured meshes as an MPEG video stream. Quantitative experiments demonstrate geometric accuracy, texture fidelity, and encoding efficiency. We release several datasets with calibrated inputs and processed results to foster future research.
You are cordially invited to attend the MSc thesis defence of Niels van der Veen. This defense will take place on Friday the 23rd of November, at 12:00h in room LB01.010, building 36.
The presentation is open to the public and will last around 45 minutes, including questions from the audience.
A paint-based approach for optimal lighting design in real scenes
Lighting design is a computationally expensive task, commonly done using computer-aided design (CAD) software in a virtual scene. The designer places and tunes the virtual light sources, and although the virtual environment is ideal for physically correct light tracing, the costly simulation might not produce the desired impression in the real scene. Moreover, the chosen light placement is not necessarily optimal. In this work, we capture light behavior from real scenes as well as 3D scene properties, and use this information to recreate different lighting designs. The results approximate physical correctness while visualizing the illumination on the real scene in a more time-efficient way. To make the design process more intuitive, the user paints the desired light properties instead of placing the light sources. The system then attempts to find valid positions and parameters for the light sources in the scene that reflect the current design. Constraints such as the number of light sources and their emission profiles, as well as spatial constraints, can also be specified.
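The "paint the target, solve for the lights" idea can be sketched as a small inverse problem (the inverse-square transport model, candidate positions, and all names below are hypothetical simplifications, not the thesis' actual system): build a transport matrix from candidate lights to surface points, then fit nonnegative light intensities to the painted brightness.

```python
import numpy as np

# Surface sample points along a wall and candidate light positions (made up).
surface = np.linspace(0.0, 4.0, 40)
lights = np.array([0.5, 2.0, 3.5])

# Transport matrix: inverse-square falloff from each candidate light
# to each surface point (a stand-in for captured light behavior).
dist = np.abs(surface[:, None] - lights[None, :]) + 0.5
A = 1.0 / dist**2

# The "painted" target: brightness the designer wants on the wall,
# here synthesized from a known light so the recovery can be checked.
true_intensity = np.array([0.0, 3.0, 0.0])
target = A @ true_intensity

# Least-squares fit of light intensities to the painted target,
# clamped to physically valid (nonnegative) values.
intensity, *_ = np.linalg.lstsq(A, target, rcond=None)
intensity = np.maximum(intensity, 0.0)
print(np.round(intensity, 3))  # recovers intensities close to [0, 3, 0]
```

A real system would replace the analytic falloff with the captured light behavior and add the user's constraints (light count, emission profiles, spatial limits) to the optimization.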