Category Archives: Colloquia

CG Colloquium Thursday February 7th

You are cordially invited to attend our Computer Graphics and Visualization Seminar on Thursday, February 7, 2019, 15:45-17:45h, at EWI-Lecture hall Chip.

The program features the following two speakers:

Haoming Yeh
Title: Projective Dynamics: Fusing Constraint Projections for Fast Simulation
Abstract: We present a new method for implicit time integration of physical systems. Our approach builds a bridge between nodal Finite Element methods and Position Based Dynamics, leading to a simple, efficient, robust, yet accurate solver that supports many different types of constraints. We propose specially designed energy potentials that can be solved efficiently using an alternating optimization approach. Inspired by continuum mechanics, we derive a set of continuum based potentials that can be efficiently incorporated within our solver. We demonstrate the generality and robustness of our approach in many different applications ranging from the simulation of solids, cloths, and shells, to example-based simulation. Comparisons to Newton-based and Position Based Dynamics solvers highlight the benefits of our formulation.
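The alternating local/global optimization at the heart of this approach can be illustrated on a toy mass-spring chain under gravity. This is a minimal sketch, not the paper's implementation: the node count, time step, stiffness, and the soft penalty used to pin the first node are all illustrative choices.

```python
import numpy as np

n, h, mass, w = 8, 0.033, 1.0, 1.0e3   # nodes, time step, node mass, constraint weight
rest = 0.1                              # spring rest length
g = np.array([0.0, -9.8])

x = np.stack([np.linspace(0.0, rest * (n - 1), n), np.zeros(n)], axis=1)
v = np.zeros_like(x)
edges = [(i, i + 1) for i in range(n - 1)]

# The global-step matrix A = (m/h^2) I + w L is constant, so it can be
# prefactored once (here simply inverted) and reused for every iteration.
L = np.zeros((n, n))
for a, b in edges:
    L[a, a] += 1.0
    L[b, b] += 1.0
    L[a, b] -= 1.0
    L[b, a] -= 1.0
A = (mass / h**2) * np.eye(n) + w * L
PIN = 1.0e8                             # soft attachment weight pinning node 0
A[0, 0] += PIN
A_inv = np.linalg.inv(A)

s = x + h * v + h**2 * g                # per-node inertia (momentum) target
x_new = s.copy()
for _ in range(20):                     # alternating local/global optimization
    # Local step: project each spring difference onto its rest-length sphere.
    rhs = (mass / h**2) * s
    rhs[0] += PIN * x[0]                # keep node 0 at its original position
    for a, b in edges:
        d = x_new[a] - x_new[b]
        d *= rest / max(np.linalg.norm(d), 1e-12)
        rhs[a] += w * d
        rhs[b] -= w * d
    # Global step: a single linear solve with the cached factorization.
    x_new = A_inv @ rhs
v = (x_new - x) / h
x = x_new
```

The key property on display is that the local projections are independent (and trivially parallel), while the global step reuses one prefactored linear system, which is where the method's speed comes from.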

Matthijs Amesz
Title: Inverse Diffusion Curves using Shape Optimization
Abstract: The inverse diffusion curve problem focuses on the automatic creation of diffusion curve images that resemble user-provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on the resulting color fields via a partial differential equation (PDE). We introduce a new approach, complementary to previous methods, that optimizes curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in …
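For readers unfamiliar with the forward problem this talk inverts: a diffusion curve image is obtained by solving Laplace's equation with colors prescribed along curves as Dirichlet constraints. A toy single-channel forward solver (plain Jacobi relaxation; resolution and curve placement are arbitrary illustrative choices) looks like this:

```python
import numpy as np

n = 65
img = np.zeros((n, n))                 # one color channel, for simplicity
fixed = np.zeros((n, n), dtype=bool)   # Dirichlet mask: pixels lying on a curve

# Two "curves": vertical segments with prescribed colors +1 and -1.
img[10:55, 16] = 1.0
fixed[10:55, 16] = True
img[10:55, 48] = -1.0
fixed[10:55, 48] = True

# Diffuse the curve colors over the image: Jacobi relaxation of Laplace's
# equation, holding the curve pixels fixed.
for _ in range(2000):
    avg = 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0)
                  + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    img = np.where(fixed, img, avg)
```

The resulting image blends smoothly between the two prescribed colors, which makes the difficulty of the inverse problem concrete: moving a curve changes the Dirichlet boundary of the PDE and therefore affects the entire image, not just nearby pixels.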

CG Colloquium Thursday January 10th

You are cordially invited to attend our Computer Graphics and Visualization Seminar on Thursday, January 10, 2019, 15:45-17:45h, at Pulse-Hall 4.

The program features the following two speakers:

Anmol Hanagodimath
Title: Optimizing BRDF Orientations for the Manipulation of Anisotropic Highlights
Abstract: This paper introduces a system for the direct editing of highlights produced by anisotropic BRDFs, which we call anisotropic highlights. We first provide a comprehensive analysis of the link between the direction of anisotropy and the shape of highlight curves for arbitrary object surfaces. The gained insights provide the required ingredients to infer BRDF orientations from a prescribed highlight tangent field. This amounts to a non-linear optimization problem, which is solved at interactive framerates during manipulation. Taking inspiration from sculpting software, we provide tools that give the impression of manipulating highlight curves while actually modifying their tangents. Our solver produces desired highlight shapes for a host of lighting environments and anisotropic BRDFs.

Shang Xiang
Title: The Heat Method for Distance Computation
Abstract: We introduce the heat method for solving the single- or multiple-source shortest path problem on both flat and curved domains. A key insight is that distance computation can be split into two stages: first find the direction along which distance is increasing, then compute the distance itself. The heat method is robust, efficient, and simple to implement since it is based on solving a pair of standard sparse linear systems. These systems can be factored once and subsequently solved in near-linear time, substantially reducing amortized cost. Real-world performance is an order of magnitude faster than state-of-the-art methods, while maintaining a comparable level of accuracy. The method can be applied in any dimension, and on any domain that admits a gradient and inner product—including regular grids, triangle meshes, and point clouds. Numerical evidence indicates that the method converges to the exact distance in the limit of refinement; we also explore smoothed approximations of distance suitable for applications where greater regularity is desired.
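The two-stage pipeline (plus the initial heat solve) can be sketched in a few lines on a regular grid with the standard 5-point Laplacian. This is an illustrative toy, not the paper's code: the resolution, time step, and the small diagonal shift used to sidestep the Laplacian's constant nullspace are all assumptions of this sketch.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 64                                  # grid resolution (n x n), spacing h = 1
N = n * n

def idx(i, j):
    return i * n + j

# Build the 5-point Laplacian (negative semi-definite, Neumann-style boundary).
rows, cols, vals = [], [], []
for i in range(n):
    for j in range(n):
        nbrs = [(a, b) for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                if 0 <= a < n and 0 <= b < n]
        rows.append(idx(i, j)); cols.append(idx(i, j)); vals.append(-float(len(nbrs)))
        for a, b in nbrs:
            rows.append(idx(i, j)); cols.append(idx(a, b)); vals.append(1.0)
L = sp.csc_matrix((vals, (rows, cols)), shape=(N, N))

# Stage 0: integrate heat flow for a short time t from a point source (t ~ h^2).
t = 1.0
delta = np.zeros(N)
delta[idx(n // 2, n // 2)] = 1.0
u = spla.spsolve(sp.identity(N, format="csc") - t * L, delta)

# Stage 1: the direction of increasing distance is X = -grad(u) / |grad(u)|.
gy, gx = np.gradient(u.reshape(n, n))
norm = np.maximum(np.hypot(gx, gy), 1e-12)
Xx, Xy = -gx / norm, -gy / norm

# Stage 2: recover the distance phi by solving the Poisson equation L phi = div(X).
div = np.gradient(Xx, axis=1) + np.gradient(Xy, axis=0)
rhs = div.ravel() - div.mean()          # remove the constant nullspace component
phi = spla.spsolve(L - 1e-8 * sp.identity(N, format="csc"), rhs)
phi -= phi.min()                        # shift so the source sits at distance ~0
phi_grid = phi.reshape(n, n)
```

Both solves involve fixed sparse matrices, which is exactly why the method amortizes so well: the factorizations can be cached and reused for any number of source points.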

CG Colloquium Thursday December 13th

You are cordially invited to attend our Computer Graphics and Visualization Seminar on Thursday, December 13th, 2018, 15:45-17:45h, at Pulse-Hall 2.

The program features the following two speakers:

Marie Kegeleers

Title: Soccer on Your Tabletop

Abstract: We present a system that transforms a monocular video of a soccer game into a moving 3D reconstruction, in which the players and field can be rendered interactively with a 3D viewer or through an Augmented Reality device. At the heart of our paper is an approach to estimate the depth map of each player, using a CNN that is trained on 3D player data extracted from soccer video games. We compare with state-of-the-art body pose and depth estimation techniques, and show results on both synthetic ground-truth benchmarks and real YouTube soccer footage.

Levi van Aanholt

Title: Interactive Sketching of Urban Procedural Models 

Abstract: 3D modeling remains a notoriously difficult task for novices despite significant research effort to provide intuitive and automated systems. We tackle this problem by combining the strengths of two popular domains: sketch-based modeling and procedural modeling. On the one hand, sketch-based modeling exploits our ability to draw but requires detailed, unambiguous drawings to achieve complex models. On the other hand, procedural modeling automates the creation of precise and detailed geometry but requires the tedious definition and parameterization of procedural models. Our system uses a collection of simple procedural grammars, called snippets, as building blocks to turn sketches into realistic 3D models. We use a machine learning approach to solve the inverse problem of finding the procedural model that best explains a user sketch. We use non-photorealistic rendering to generate artificial data for training convolutional neural networks capable of quickly recognizing the procedural rule intended by a sketch and estimating its parameters. We integrate our algorithm in a coarse-to-fine urban modeling system that allows users to create rich buildings by successively sketching the building mass, roof, facades, windows, and ornaments. A user study shows that by using our approach, non-expert users can generate complex buildings in just a few minutes.

CG Colloquium Thursday November 29th

You are cordially invited to attend our Computer Graphics and Visualization Seminar on Thursday, November 29th, 2018, 15:45-17:45h, at Pulse-Hall 2.

The program features the following two speakers:

Jasper van Esveld
CoreCavity: Interactive Shell Decomposition for Fabrication with Two-Piece Rigid Molds
Molding is a popular mass production method, in which the initial expenses for the mold are offset by the low per-unit production cost. However, the physical fabrication constraints of the molding technique commonly restrict the shape of moldable objects. For a complex shape, a decomposition of the object into moldable parts is a common strategy to address these constraints, with plastic model kits being a popular and illustrative example. However, conducting such a decomposition requires considerable expertise, and it depends on the technical aspects of the fabrication technique, as well as aesthetic considerations. We present an interactive technique to create such decompositions for two-piece molding, in which each part of the object is cast between two rigid mold pieces. Given the surface description of an object, we decompose its thin-shell equivalent into moldable parts by first performing a coarse decomposition and then utilizing an active contour model for the boundaries between individual parts. Formulated as an optimization problem, the movement of the contours is guided by an energy reflecting fabrication constraints to ensure the moldability of each part. Simultaneously the user is provided with editing capabilities to enforce aesthetic guidelines. Our interactive interface provides control of the contour positions by allowing, for example, the alignment of part boundaries with object features. Our technique enables a novel workflow, as it empowers novice users to explore the design space, and it generates fabrication-ready two-piece molds that can be used either for casting or industrial injection molding of free-form objects.

Bartosz Zablocki
High-quality streamable free-viewpoint video
We present the first end-to-end solution to create high-quality free-viewpoint video encoded as a compact data stream. Our system records performances using a dense set of RGB and IR video cameras, generates dynamic textured surfaces, and compresses these to a streamable 3D video format. Four technical advances contribute to high fidelity and robustness: multimodal multi-view stereo fusing RGB, IR, and silhouette information; adaptive meshing guided by automatic detection of perceptually salient areas; mesh tracking to create temporally coherent subsequences; and encoding of tracked textured meshes as an MPEG video stream. Quantitative experiments demonstrate geometric accuracy, texture fidelity, and encoding efficiency. We release several datasets with calibrated inputs and processed results to foster future research.

Computer Graphics and Visualization Research Seminar

You are cordially invited to attend our Computer Graphics and Visualization Research Seminar on Thursday, November 22nd, 2018, 13:00-14:00, at VMB COLLOQUIUMZAAL.

The program features an invited talk:

Speaker: Dr. Jun Wu
Assistant Professor, Department of Design Engineering, Delft University of Technology

Title: Topology Optimization of Multiscale Structures for 3D Printing

Abstract: 3D printing enables the fabrication of complex structures. In engineering, the benefits of this manufacturing flexibility are probably best demonstrated in combination with the design of structures by topology optimization. In this talk I will present our research on topology optimization of multiscale structures. The first approach, inspired by trabecular bone, generates porous bone-like infill structures for 3D printing. This approach is extended to allow simultaneous optimization of shape boundaries and the interior lattice. To reduce its computational complexity, a homogenization-based approach is developed to project orthotropic lattice structures. In the second approach, we develop a continuous formulation of the discrete problem of quadtree subdivision. By restricting the subdivision level, geometric features such as maximum and minimum structure sizes can be controlled for manufacturing benefits.

Images of optimized structures and fabrication results can be found on this page.

CG Colloquium Thursday November 15th

You are cordially invited to attend our Computer Graphics and Visualization Seminar on Thursday, November 15th, 2018, 15:45-17:45h, at Pulse-Hall 2.

The program features the following two speakers:

Huinan Jiang
A Chebyshev Semi-Iterative Approach for Accelerating Projective and Position-based Dynamics
In this paper, we study the use of the Chebyshev semi-iterative approach in projective and position-based dynamics. Although projective dynamics is fundamentally nonlinear, its convergence behavior is similar to that of an iterative method solving a linear system. Because of that, we can estimate the “spectral radius” and use it in the Chebyshev approach to accelerate the convergence by at least one order of magnitude, when the global step is handled by the direct solver, the Jacobi solver, or even the Gauss-Seidel solver. Our experiment shows that the combination of the Chebyshev approach and the direct solver runs fastest on CPU, while the combination of the Chebyshev approach and the Jacobi solver outperforms any other combination on GPU, as it is highly compatible with parallel computing. Our experiment further shows position-based dynamics can be accelerated by the Chebyshev approach as well, although the effect is less obvious for tetrahedral meshes. The whole approach is simple, fast, effective, GPU-friendly, and has a small memory cost.
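The acceleration scheme can be illustrated on a plain linear system, where the spectral radius of the Jacobi iteration matrix is known in closed form (for projective dynamics the paper instead estimates it from sample runs). System size, iteration counts, and the comparison setup are illustrative choices of this sketch:

```python
import numpy as np

n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson system A x = b
b = np.ones(n)
x_true = np.linalg.solve(A, b)

def jacobi_step(x):
    # One Jacobi sweep: x' = D^{-1} (b - (A - D) x), with D = diag(A) = 2I here.
    return (b - (A @ x - 2.0 * x)) / 2.0

rho = np.cos(np.pi / (n + 1))        # exact spectral radius of the Jacobi iteration

def solve(iters, chebyshev):
    x_prev = np.zeros(n)
    x = jacobi_step(x_prev)          # first iterate uses omega_1 = 1
    omega = 1.0
    for k in range(1, iters):
        x_hat = jacobi_step(x)
        if chebyshev:
            # Chebyshev semi-iterative weights: omega_2 = 2/(2 - rho^2),
            # then omega_{k+1} = 4/(4 - rho^2 * omega_k).
            omega = 2.0 / (2.0 - rho**2) if k == 1 else 4.0 / (4.0 - rho**2 * omega)
            x_next = omega * (x_hat - x_prev) + x_prev
        else:
            x_next = x_hat
        x_prev, x = x, x_next
    return x

err_plain = np.linalg.norm(solve(400, False) - x_true)
err_cheb = np.linalg.norm(solve(400, True) - x_true)
```

After the same 400 sweeps, the Chebyshev-weighted combination of iterates is orders of magnitude closer to the solution than plain Jacobi, at the cost of storing just one extra previous iterate.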

Jesse Tilro
Phase-Functioned Neural Networks for Character Control
We present a real-time character control mechanism using a novel neural network architecture called a Phase-Functioned Neural Network. In this network structure, the weights are computed via a cyclic function which uses the phase as an input. Along with the phase, our system takes as input user controls, the previous state of the character, the geometry of the scene, and automatically produces high quality motions that achieve the desired user control. The entire network is trained in an end-to-end fashion on a large dataset composed of locomotion such as walking, running, jumping, and climbing movements fitted into virtual environments. Our system can therefore automatically produce motions where the character adapts to different geometric environments such as walking and running over rough terrain, climbing over large rocks, jumping over obstacles, and crouching under low ceilings. Our network architecture produces higher quality results than time-series autoregressive models such as LSTMs as it deals explicitly with the latent variable of motion relating to the phase. Once trained, our system is also extremely fast and compact, requiring only milliseconds of execution time and a few megabytes of memory, even when trained on gigabytes of motion data. Our work is most appropriate for controlling characters in interactive scenes such as computer games and virtual reality systems.
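The defining trick of the architecture, generating a layer's weights from the phase via a cyclic spline over a few learned control weight sets, can be sketched as follows. The dimensions, random "learned" weights, and omission of biases are assumptions of this toy; the paper's phase function uses the same cyclic Catmull-Rom interpolation over four control points.

```python
import numpy as np

rng = np.random.default_rng(0)
in_dim, out_dim = 32, 16
alpha = rng.standard_normal((4, out_dim, in_dim)) * 0.1   # learned control weight sets

def phase_weights(phase):
    """Cyclic Catmull-Rom spline through the four control weights; phase in [0, 2*pi)."""
    p = 4.0 * phase / (2.0 * np.pi)       # spline parameter over the four segments
    k = int(p) % 4                        # current segment
    t = p - int(p)                        # local parameter in [0, 1)
    a0, a1, a2, a3 = (alpha[(k - 1) % 4], alpha[k],
                      alpha[(k + 1) % 4], alpha[(k + 2) % 4])
    return (a1
            + t * (0.5 * a2 - 0.5 * a0)
            + t**2 * (a0 - 2.5 * a1 + 2.0 * a2 - 0.5 * a3)
            + t**3 * (1.5 * a1 - 1.5 * a2 + 0.5 * a3 - 0.5 * a0))

def layer(x, phase):
    # A single phase-functioned layer: ReLU(W(phase) @ x); biases omitted for brevity.
    return np.maximum(phase_weights(phase) @ x, 0.0)

x = rng.standard_normal(in_dim)
y0 = layer(x, 0.0)
y1 = layer(x, np.pi)    # same input, different phase -> different generated weights
```

Because only the four control weight sets are stored, the network stays compact; the per-frame cost is one spline evaluation plus an ordinary forward pass.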

CG Colloquium Thursday November 1st

You are cordially invited to attend our Computer Graphics and Visualization Seminar on Thursday, November 1st, 2018, 15:45-17:45h, at Pulse-Hall 4.

The program features the following two speakers:

Youri Appel
Towards Virtual Reality Infinite Walking: Dynamic Saccadic Redirection
Redirected walking techniques can enhance the immersion and visual-vestibular comfort of virtual reality (VR) navigation, but are often limited by the size, shape, and content of the physical environments.
We propose a redirected walking technique that can apply to small physical environments with static or dynamic obstacles. Via a head- and eye-tracking VR headset, our method detects saccadic suppression and redirects the users during the resulting temporary blindness. Our dynamic path planning runs in real-time on a GPU, and thus can avoid static and dynamic obstacles, including walls, furniture, and other VR users sharing the same physical space. To further enhance saccadic redirection, we propose subtle gaze direction methods tailored for VR perception.
We demonstrate that saccades can significantly increase the rotation gains during redirection without introducing visual distortions or simulator sickness. This allows our method to apply to large open virtual spaces and small physical environments for room-scale VR. We evaluate our system via numerical simulations and real user studies.
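The gating logic at the core of saccadic redirection is simple to state: inject a small rotation gain only while the tracked eye velocity indicates a saccade (and hence saccadic suppression). The threshold and per-frame gain below are illustrative placeholders, not the paper's calibrated values.

```python
SACCADE_THRESHOLD_DEG_S = 180.0   # eye angular speed above which a saccade is assumed
GAIN_DEG = 0.14                   # small extra camera yaw injected per redirected frame

def redirect(eye_speed_deg_s, camera_yaw_deg, toward_sign):
    """Inject rotation gain only while saccadic suppression is plausibly active."""
    if abs(eye_speed_deg_s) >= SACCADE_THRESHOLD_DEG_S:
        camera_yaw_deg += toward_sign * GAIN_DEG
    return camera_yaw_deg
```

Called once per frame with the eye tracker's velocity estimate, this accumulates redirection only during the brief windows in which the user is effectively blind to it.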

Berend Baas
Guided proceduralization: Optimizing geometry processing and grammar extraction for architectural models
We describe a guided proceduralization framework that optimizes geometry processing on architectural input models to extract target grammars. We aim to provide efficient artistic workflows by creating procedural representations from existing 3D models, where the procedural expressiveness is controlled by the user. Architectural reconstruction and modeling tasks have been handled either as time-consuming manual processes or as procedural generation with difficult control and artistic influence. We bridge the gap between creation and generation by converting existing manually modeled architecture to procedurally editable parametrized models, and by carrying the guidance into the procedural domain by letting the user define the target procedural representation. Additionally, we propose various applications of such procedural representations, including guided completion of point cloud models, controllable 3D city modeling, and other benefits of procedural modeling.

CG Colloquium Thursday October 4th

You are cordially invited to attend our Computer Graphics and Visualization Seminar on Thursday, October 4, 2018, 15:45-17:45h, at Pulse-Hall 4.

The program features the following two speakers:

Mathijs Molenaar

Title: Occlusion culling in memory-coherent ray tracing


In this project I aim to improve the performance of out-of-core ray/path tracing based on memory-coherent ray tracing, in which the acceleration structure is split into two layers: the first layer is always in memory, while subtrees in the second layer are evicted from memory when deemed necessary. Rays are batched at unloaded leaf nodes (in the top-level tree), and only when a batch is full will the leaf node be loaded from disk and intersected. My research question is whether rendering performance can be improved by keeping a low-resolution representation of each top-level leaf node in memory at all times and using it as an early-out for rays hitting a leaf’s bounding volume. This will reduce the number of disk operations at the cost of some extra computation time.

Wouter Groen

Title: Precomputed Light-Transport Networks for Volume Rendering


Rendering volumetric data including complex lighting phenomena is a difficult task.

Previous solutions, such as Exposure Render, involve a Monte Carlo process that has to shoot many rays in order to approximate the light transport faithfully. In consequence, the process is costly and efficient image synthesis becomes challenging. In this project, we want to investigate the principle of path reuse by building a network of light-transport paths in a preprocess. Hereby, we avoid the costly process of establishing new branchings for each ray that traverses the volume. This talk will be an initialization talk, in which we describe the goals that we will pursue in the months to come.

Our initial plan is as follows. Given a volumetric data set, we want to precompute the result of a set of rays within this volume, steered by the volume data itself via mechanisms such as importance sampling. These rays will be connected to establish a light-transport network. Our goal is to make use of this light-transport network to accelerate the computation of an approximate light transport at run-time. When rendering, we will launch rays from the light/camera and connect these rays to the precomputed network. Next, the energies carried by the rays will simply be propagated along the paths of the network to estimate an overall light contribution. In this way, only a few new intersection tests need to be performed, while many paths in the network are reused. Hereby, run-time costs are reduced drastically with, hopefully, little visual impact. There are several research questions to be answered: How to represent and store the network efficiently? How to derive it in a fast way? How to structure the computations efficiently? How to enable an unbiased result?
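One possible instantiation of the reuse step described above can be sketched as follows. Everything here is a hypothetical placeholder (random "precomputed" vertices and contributions, nearest-vertex connection); it only illustrates the shape of the idea, not a settled design.

```python
import numpy as np

rng = np.random.default_rng(1)
verts = rng.uniform(0.0, 1.0, size=(200, 3))   # precomputed path vertices in the volume
stored = rng.uniform(0.0, 1.0, size=200)       # precomputed light contribution per vertex

def shade(point):
    """Connect a ray's sample point to the network and reuse the stored transport."""
    # One new connection replaces the many intersection tests a fresh path would need.
    i = np.argmin(np.linalg.norm(verts - point, axis=1))
    return stored[i]

radiance = shade(np.array([0.5, 0.5, 0.5]))
```

The open questions listed above map directly onto this sketch: how the network is stored (here a flat array), how a new ray connects to it (here nearest-vertex), and how bias from the reuse is controlled.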

CG Colloquium Thursday September 20th

You are cordially invited to attend our next Computer Graphics Colloquium, which will be held on:

Thursday, September 20th, 2018, 15:45-16:45h, at EWI-Lecture Room F.

The program features a guest talk.

Speaker: Liangliang Nan, Assistant Professor, 3D Geoinformation Group, Faculty of Architecture and the Built Environment, Delft University of Technology

Title: Modeling Real-World Scenes
Abstract: Capturing real-world scenes in 3D has been made possible by advances in scanning and photogrammetric technologies. This has attracted increasing interest in acquiring, analyzing, and modeling real-world scenes. However, obtaining a faithful 3D representation of real-world scenes remains an open problem. In this talk, I would like to share my experience from the past few years of reconstructing urban scenes. In particular, I will present two algorithms: one for reconstructing coarse models and one for enriching the coarse models with fine details. In the end, we will discuss trends and topics for future research.

First CG Colloquium (2018/2019) – Thursday September 6th

The colloquium/seminar is a bi-weekly meeting that provides all CGV members, staff as well as graduate students, with a forum for communication, presentation, and scientific discussion in the area of Computer Graphics and Visualization at large. Please see this file for a more detailed description.

The first CG Colloquium of the academic year 2018/2019 will be held on Thursday, September 6th. The first session will be an introduction to the seminar/colloquium for master students. Only staff and the seminar students are expected to attend.