Category Archives: Colloquia

CG Colloquium Thursday December 13th

You are cordially invited to attend our Computer Graphics and Visualization Seminar on Thursday, December 13th, 2018, 15:45-17:45h, at Pulse-Hall 2.

The program features the following two speakers:

Marie Kegeleers

Title: Soccer on Your Tabletop

Abstract: We present a system that transforms a monocular video of a soccer game into a moving 3D reconstruction, in which the players and field can be rendered interactively with a 3D viewer or through an Augmented Reality device. At the heart of our paper is an approach to estimate the depth map of each player, using a CNN that is trained on 3D player data extracted from soccer video games. We compare with state of the art body pose and depth estimation techniques, and show results on both synthetic ground truth benchmarks, and real YouTube soccer footage.

Levi van Aanholt

Title: Interactive Sketching of Urban Procedural Models 

Abstract: 3D modeling remains a notoriously difficult task for novices despite significant research effort to provide intuitive and automated systems. We tackle this problem by combining the strengths of two popular domains: sketch-based modeling and procedural modeling. On the one hand, sketch-based modeling exploits our ability to draw but requires detailed, unambiguous drawings to achieve complex models. On the other hand, procedural modeling automates the creation of precise and detailed geometry but requires the tedious definition and parameterization of procedural models. Our system uses a collection of simple procedural grammars, called snippets, as building blocks to turn sketches into realistic 3D models. We use a machine learning approach to solve the inverse problem of finding the procedural model that best explains a user sketch. We use non-photorealistic rendering to generate artificial data for training convolutional neural networks capable of quickly recognizing the procedural rule intended by a sketch and estimating its parameters. We integrate our algorithm in a coarse-to-fine urban modeling system that allows users to create rich buildings by successively sketching the building mass, roof, facades, windows, and ornaments. A user study shows that by using our approach non-expert users can generate complex buildings in just a few minutes.

CG Colloquium Thursday November 29th

You are cordially invited to attend our Computer Graphics and Visualization Seminar on Thursday, November 29th, 2018, 15:45-17:45h, at Pulse-Hall 2.

The program features the following two speakers:

Jasper van Esveld
Title
CoreCavity: Interactive Shell Decomposition for Fabrication with Two-Piece Rigid Molds
Abstract
Molding is a popular mass production method, in which the initial expenses for the mold are offset by the low per-unit production cost. However, the physical fabrication constraints of the molding technique commonly restrict the shape of moldable objects. For a complex shape, a decomposition of the object into moldable parts is a common strategy to address these constraints, with plastic model kits being a popular and illustrative example. However, conducting such a decomposition requires considerable expertise, and it depends on the technical aspects of the fabrication technique, as well as aesthetic considerations. We present an interactive technique to create such decompositions for two-piece molding, in which each part of the object is cast between two rigid mold pieces. Given the surface description of an object, we decompose its thin-shell equivalent into moldable parts by first performing a coarse decomposition and then utilizing an active contour model for the boundaries between individual parts. Formulated as an optimization problem, the movement of the contours is guided by an energy reflecting fabrication constraints to ensure the moldability of each part. Simultaneously the user is provided with editing capabilities to enforce aesthetic guidelines. Our interactive interface provides control of the contour positions by allowing, for example, the alignment of part boundaries with object features. Our technique enables a novel workflow, as it empowers novice users to explore the design space, and it generates fabrication-ready two-piece molds that can be used either for casting or industrial injection molding of free-form objects.

Bartosz Zablocki
Title
High-quality streamable free-viewpoint video
Abstract
We present the first end-to-end solution to create high-quality free-viewpoint video encoded as a compact data stream. Our system records performances using a dense set of RGB and IR video cameras, generates dynamic textured surfaces, and compresses these to a streamable 3D video format. Four technical advances contribute to high fidelity and robustness: multimodal multi-view stereo fusing RGB, IR, and silhouette information; adaptive meshing guided by automatic detection of perceptually salient areas; mesh tracking to create temporally coherent subsequences; and encoding of tracked textured meshes as an MPEG video stream. Quantitative experiments demonstrate geometric accuracy, texture fidelity, and encoding efficiency. We release several datasets with calibrated inputs and processed results to foster future research.

Computer Graphics and Visualization Research Seminar

You are cordially invited to attend our Computer Graphics and Visualization Research Seminar on Thursday, November 22nd, 2018, 13:00-14:00, at VMB COLLOQUIUMZAAL.

The program features an invited talk:

Speaker: Dr. Jun Wu
Assistant Professor, Department of Design Engineering, Delft University of Technology

Title: Topology Optimization of Multiscale Structures for 3D Printing

Abstract: 3D printing enables the fabrication of complex structures. In engineering the benefits of this manufacturing flexibility are probably best demonstrated in combination with the design of structures by topology optimization. In this talk I will present our research on topology optimization of multiscale structures. The first approach, inspired by trabecular bone, generates porous bone-like infill structures for 3D printing. This approach is extended to allow simultaneous optimization of shape boundaries and the interior lattice. To reduce its computational complexity, a homogenization-based approach is developed to project orthotropic lattice structures. In the second approach, we develop a continuous formulation to the discrete problem of quadtree subdivision. By restricting the subdivision level, geometric features such as maximum and minimum structure sizes can be controlled, for manufacturing benefits.

Images of optimized structures and fabrication results can be found on this page.

CG Colloquium Thursday November 15th

You are cordially invited to attend our Computer Graphics and Visualization Seminar on Thursday, November 15th, 2018, 15:45-17:45h, at Pulse-Hall 2.

The program features the following two speakers:


Huinan Jiang
Title
A Chebyshev Semi-Iterative Approach for Accelerating Projective and Position-based Dynamics
Abstract
In this paper, we study the use of the Chebyshev semi-iterative approach in projective and position-based dynamics. Although projective dynamics is fundamentally nonlinear, its convergence behavior is similar to that of an iterative method solving a linear system. Because of that, we can estimate the “spectral radius” and use it in the Chebyshev approach to accelerate the convergence by at least one order of magnitude, when the global step is handled by the direct solver, the Jacobi solver, or even the Gauss-Seidel solver. Our experiment shows that the combination of the Chebyshev approach and the direct solver runs fastest on CPU, while the combination of the Chebyshev approach and the Jacobi solver outperforms any other combination on GPU, as it is highly compatible with parallel computing. Our experiment further shows position-based dynamics can be accelerated by the Chebyshev approach as well, although the effect is less obvious for tetrahedral meshes. The whole approach is simple, fast, effective, GPU-friendly, and has a small memory cost.
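As an illustration of the core idea, the Chebyshev recurrence described in the abstract can be demonstrated on a plain linear system with a Jacobi inner solver. The sketch below is our own illustration (not the authors' code); it uses the weight recurrence from the paper, with `rho` the estimated spectral radius of the base iteration:

```python
import numpy as np

def chebyshev_jacobi(A, b, rho, iters=100):
    """Chebyshev semi-iterative acceleration of Jacobi iteration.

    rho estimates the spectral radius of the Jacobi iteration
    matrix; the same weight recurrence underlies the accelerated
    projective dynamics solver described in the talk.
    """
    D = np.diag(A)                      # diagonal part of A
    R = A - np.diagflat(D)              # off-diagonal remainder
    x_prev = np.zeros_like(b)
    x = np.zeros_like(b)
    omega = 1.0
    for k in range(iters):
        x_jacobi = (b - R @ x) / D      # one plain Jacobi step
        if k == 0:
            omega = 1.0                 # first step: no acceleration
        elif k == 1:
            omega = 2.0 / (2.0 - rho * rho)
        else:
            omega = 4.0 / (4.0 - rho * rho * omega)
        x_next = omega * (x_jacobi - x_prev) + x_prev
        x_prev, x = x, x_next
    return x
```

With a reasonable `rho` estimate, the accelerated iterates converge markedly faster than plain Jacobi; an overestimate degrades gracefully, which is part of what makes the approach practical.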


Jesse Tilro
Title
Phase-Functioned Neural Networks for Character Control
Abstract
We present a real-time character control mechanism using a novel neural network architecture called a Phase-Functioned Neural Network. In this network structure, the weights are computed via a cyclic function which uses the phase as an input. Along with the phase, our system takes as input user controls, the previous state of the character, the geometry of the scene, and automatically produces high quality motions that achieve the desired user control. The entire network is trained in an end-to-end fashion on a large dataset composed of locomotion such as walking, running, jumping, and climbing movements fitted into virtual environments. Our system can therefore automatically produce motions where the character adapts to different geometric environments such as walking and running over rough terrain, climbing over large rocks, jumping over obstacles, and crouching under low ceilings. Our network architecture produces higher quality results than time-series autoregressive models such as LSTMs as it deals explicitly with the latent variable of motion relating to the phase. Once trained, our system is also extremely fast and compact, requiring only milliseconds of execution time and a few megabytes of memory, even when trained on gigabytes of motion data. Our work is most appropriate for controlling characters in interactive scenes such as computer games and virtual reality systems.
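The defining trick, computing the network weights as a cyclic function of the phase, can be sketched in a few lines. The paper blends four control weight sets with a cubic Catmull-Rom spline; the snippet below is our own minimal reconstruction of that blending step, not the authors' code:

```python
import numpy as np

def phase_function(phase, control_weights):
    """Cubic Catmull-Rom blend of network weights over a cyclic phase.

    phase lies in [0, 2*pi); control_weights holds 4 weight arrays,
    one per quarter of the cycle, mirroring how the Phase-Functioned
    Neural Network derives its layer weights from the phase.
    """
    p = 4.0 * phase / (2.0 * np.pi)      # map phase to [0, 4)
    k = int(p) % 4                       # index of current segment
    w = p - int(p)                       # position within segment
    y0 = control_weights[(k - 1) % 4]
    y1 = control_weights[k]
    y2 = control_weights[(k + 1) % 4]
    y3 = control_weights[(k + 2) % 4]
    # Catmull-Rom cubic interpolation, periodic in the phase
    return (y1
            + w * (0.5 * y2 - 0.5 * y0)
            + w * w * (y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3)
            + w * w * w * (1.5 * y1 - 0.5 * y0 - 1.5 * y2 + 0.5 * y3))
```

Because only the four control weight sets are stored and the blend is evaluated per frame, the deployed network stays compact while its weights vary smoothly and cyclically with the motion phase.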

CG Colloquium Thursday November 1st

You are cordially invited to attend our Computer Graphics and Visualization Seminar on Thursday, November 1st, 2018, 15:45-17:45h, at Pulse-Hall 4.

The program features the following two speakers:

Youri Appel
Title
Towards Virtual Reality Infinite Walking: Dynamic Saccadic Redirection
Abstract
Redirected walking techniques can enhance the immersion and visual-vestibular comfort of virtual reality (VR) navigation, but are often limited by the size, shape, and content of the physical environments.
We propose a redirected walking technique that can apply to small physical environments with static or dynamic obstacles. Via a head- and eye-tracking VR headset, our method detects saccadic suppression and redirects the users during the resulting temporary blindness. Our dynamic path planning runs in real-time on a GPU, and thus can avoid static and dynamic obstacles, including walls, furniture, and other VR users sharing the same physical space. To further enhance saccadic redirection, we propose subtle gaze direction methods tailored for VR perception.
We demonstrate that saccades can significantly increase the rotation gains during redirection without introducing visual distortions or simulator sickness. This allows our method to apply to large open virtual spaces and small physical environments for room-scale VR. We evaluate our system via numerical simulations and real user studies.
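The detect-and-redirect loop at the heart of the method can be sketched as follows. This is our own schematic, not the paper's implementation; the velocity threshold and per-frame gain are placeholder values, not the values used in the study:

```python
import numpy as np

SACCADE_VELOCITY_DEG_S = 180.0   # illustrative saccade-detection threshold
GAIN_DEG_PER_FRAME = 0.14        # illustrative yaw injected per frame

def redirect(gaze_prev, gaze_curr, dt, yaw):
    """Inject extra camera yaw only while a saccade suppresses vision.

    gaze_prev/gaze_curr are unit gaze direction vectors from the
    eye tracker; dt is the frame time in seconds.
    """
    cos_a = np.clip(np.dot(gaze_prev, gaze_curr), -1.0, 1.0)
    eye_speed = np.degrees(np.arccos(cos_a)) / dt   # angular eye velocity
    if eye_speed >= SACCADE_VELOCITY_DEG_S:         # saccade detected
        yaw += GAIN_DEG_PER_FRAME                   # imperceptible rotation
    return yaw
```

The key property is that rotation is only added during the brief saccadic blindness, so the user never consciously perceives the redirection.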

Berend Baas
Title
Guided proceduralization: Optimizing geometry processing and grammar extraction for architectural models
Abstract
We describe a guided proceduralization framework that optimizes geometry processing on architectural input models to extract target grammars. We aim to provide efficient artistic workflows by creating procedural representations from existing 3D models, where the procedural expressiveness is controlled by the user. Architectural reconstruction and modeling tasks have been handled as either time-consuming manual processes or procedural generation with difficult control and artistic influence. We bridge the gap between creation and generation by converting existing manually modeled architecture to procedurally editable parametrized models, and carrying the guidance to the procedural domain by letting the user define the target procedural representation. Additionally, we propose various applications of such procedural representations, including guided completion of point cloud models, controllable 3D city modeling, and other benefits of procedural modeling.

CG Colloquium Thursday October 4th

You are cordially invited to attend our Computer Graphics and Visualization Seminar on Thursday, October 4, 2018, 15:45-17:45h, at Pulse-Hall 4.

The program features the following two speakers:

Mathijs Molenaar

Title: Occlusion culling in memory-coherent ray tracing

Abstract

In this project I look to improve the performance of out-of-core ray/path tracing based on memory-coherent ray tracing. In memory-coherent ray tracing the acceleration structure is split into two layers, the first of which is always in memory while the subtrees in the second layer are evicted from memory when deemed necessary. Rays are batched at unloaded leaf nodes (in the top-level tree) and only when a batch is full will the leaf node be loaded from disk and intersected. My research question is whether rendering performance can be improved by keeping a low-resolution representation of each top-level leaf node in memory at all times and using it as an early-out for rays hitting a leaf’s bounding volume. This will reduce the number of disk operations at the cost of some extra computation time.
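The batching scheme plus the proposed early-out can be sketched schematically. This is our own illustration of the idea, not the project's code; `proxy_hit` stands in for the low-resolution leaf representation and `load_and_intersect` for the expensive load from disk:

```python
BATCH_SIZE = 4  # small for illustration; real batches hold many rays

class BatchedLeaf:
    """Batches rays at an unloaded top-level leaf node.

    A cheap in-memory proxy test rejects rays before they are
    queued, so fewer batches fill up and fewer subtrees have to
    be paged in from disk.
    """
    def __init__(self, proxy_hit, load_and_intersect):
        self.proxy_hit = proxy_hit                    # low-res early-out test
        self.load_and_intersect = load_and_intersect  # expensive: touches disk
        self.pending = []
        self.disk_loads = 0

    def enqueue(self, ray):
        if not self.proxy_hit(ray):      # early-out: proxy reports a miss
            return
        self.pending.append(ray)
        if len(self.pending) >= BATCH_SIZE:
            self.flush()

    def flush(self):
        if self.pending:
            self.disk_loads += 1         # one disk load per full batch
            self.load_and_intersect(self.pending)
            self.pending = []
```

Every ray the proxy rejects is a ray that never occupies a batch slot, which directly translates into fewer (and later) disk loads.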

Wouter Groen

Title: Precomputed Light-Transport Networks for Volume Rendering

Abstract

Rendering volumetric data including complex lighting phenomena is a difficult task.

Previous solutions, such as in Exposure Render, involve a Monte Carlo process that has to shoot many rays in order to approximate the light transport faithfully. In consequence, the process is costly and efficient image synthesis becomes challenging. In this project, we want to investigate the principle of path reuse by building a network of light-transport paths in a preprocess. Thereby, we avoid the costly process of establishing new branches for each ray that traverses the volume. This talk will be an initialization talk, in which we describe the goals that we will pursue in the months to come.

Our initial plan is as follows. Given a volumetric data set, we want to precompute the result of a set of rays within this volume, which will be steered by the volume data itself via mechanisms such as importance sampling. These rays will be connected to establish a light-transport network. Our goal is to make use of this light-transport network to accelerate the computation of an approximate light transport at run-time. When rendering, we will launch rays from the light/camera and will connect these rays to the precomputed network. Next, the energies carried by the rays will simply be propagated along the paths of the network to estimate an overall light contribution. In this way, only a few new intersection tests need to be performed, while many paths in the network are reused. Thereby, run-time costs are reduced drastically with, hopefully, little visual impact. There are several research questions to be answered: How to represent and store the network efficiently? How to derive it in a fast way? How to structure the computations efficiently? How to enable an unbiased result?
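To make the run-time step concrete, the connection of a new ray to the precomputed network could look roughly like the following. Since the project is at the planning stage, everything here is an assumption on our part: the node layout, the k-nearest-neighbor connection, and the inverse-distance weighting are placeholders for whatever scheme the project eventually adopts:

```python
import numpy as np

def connect_to_network(ray_point, node_positions, node_radiance, k=3):
    """Estimate radiance for a new ray from the precomputed network.

    Connects the ray's sample point to its k nearest network nodes
    and averages their stored energy, weighted by inverse distance.
    A sketch of the proposed path-reuse idea only.
    """
    d = np.linalg.norm(node_positions - ray_point, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-6)       # inverse-distance weights
    return np.sum(w * node_radiance[nearest]) / np.sum(w)
```

The appeal is that this lookup replaces a full branching walk through the volume with a handful of distance computations against nodes whose transport was paid for once, in the preprocess.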

CG Colloquium Thursday September 20th

You are cordially invited to attend our next Computer Graphics Colloquium, which will be held on:

Thursday, September 20th, 2018, 15:45-16:45h, at EWI-Lecture Room F.

The programme features a guest talk.

Speaker: Liangliang Nan, Assistant Professor, 3D Geoinformation Group, Faculty of Architecture and the Built Environment, Delft University of Technology
https://3d.bk.tudelft.nl/liangliang/

Title: Modeling Real-World Scenes
Abstract: Capturing real-world scenes in 3D has been made possible by advances in scanning and photogrammetric technologies. This has attracted increasing interest in acquiring, analyzing, and modeling real-world scenes. However, obtaining a faithful 3D representation of real-world scenes remains an open problem. In this talk, I would like to share my experiences of the past few years in reconstructing urban scenes. In particular, I will present two algorithms: one for reconstructing coarse models and one for enriching the coarse models with fine details. In the end, we will discuss trends and some topics for future research.

First CG Colloquium (2018/2019) – Thursday September 6th

The objective of the colloquium/seminar is a bi-weekly meeting to provide all CGV members, staff as well as graduate students, with a forum for communication, presentation, and scientific discussion in the area of Computer Graphics and Visualization at large. Please see this file for a more detailed description.

The first CG Colloquium for academic year 2018/2019 will be held on Thursday, September 6th. The first session will be an introduction to the seminar/colloquium for master students. Only staff and the seminar students are expected to attend.

 

CGV Colloquium – Friday June 8th

You are cordially invited to attend our next Computer Graphics and Visualization (CGV) Seminar/ Colloquium, which will be held on:

Friday, June 8th, 2018, 15:45-16:45h, at EWI-Lecture Hall Pi.

The programme features an MSc graduation project midterm presentation.

Presenter 1: Anshul Khandelwal

Title: Reservoir Characterization using a Geometric Approach

Abstract: The project is aimed at calculating the storage capacities of water reservoirs built after 2000 using remote sensing data of the surrounding landscapes. The motivation of the project is to improve the representation of anthropogenic impacts in Global Hydrological Models (GHMs), which are used to predict water availability globally. Evaluation of the model will be done using the Shuttle RADAR Topography Mission (SRTM) data collected by NASA in 2000.

CGV Colloquium – Friday May 25th

You are cordially invited to attend our next Computer Graphics and Visualization (CGV) Seminar/ Colloquium, which will be held on:

Friday, May 25th, 2018, 15:45-17:45h, at EWI-Lecture Hall Pi.

The programme features one MSc graduation project midterm presentation and a presentation of a research project by one of our PhD students.

Presenter 1: Yunchao Yin

Title: Annotation of cerebral angiography

Abstract: Cerebral angiography is a medical imaging technique used to visualize the vessels around the brain and to provide quantitative measures for pathological changes such as arteriovenous malformations and aneurysms. However, it is difficult for patients and young clinical staff to tell the name of each vessel in the angiography. This project plans to create an automatic cerebral vessel name annotation tool based on deep learning. Both semantic segmentation and topological skeleton extraction could realize automatic vessel name annotation, but the ground truth for semantic segmentation is relatively hard to obtain and pixel-wise perfection is not required for this task, so skeleton extraction was chosen. The project can be divided into two parts: 1) angiography acquisition angle classification; 2) vessel bifurcation detection. A different cerebral vascular model is used for annotating angiographies at each projection angle, and the second part of the project predicts the vessel names.

Presenter 2: Chaoran Fan

Title: Fast and accurate CNN-based brushing in scatterplots

Abstract: Brushing plays a central role in most modern visual analytics solutions, and effective and efficient techniques for data selection are key to establishing a successful human-computer dialogue. With this paper, we address the need for brushing techniques that are both fast, enabling a fluid interaction in visual data exploration and analysis, and also accurate, i.e., enabling the user to effectively select specific data subsets, even when their geometric delimitation is non-trivial. We present a new solution for a near-perfect sketch-based brushing technique, where we exploit a convolutional neural network (CNN) for estimating the intended data selection from a fast and simple click-and-drag interaction and from the data distribution in the visualization. Our key contributions include a drastically reduced error rate—now below 3%, i.e., less than half the error of the previously best technique—and an extension to a larger variety of selected data subsets, going beyond previous limitations due to linear estimation models.
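A CNN can only consume the brushing interaction once it is encoded as an image. The sketch below shows one plausible such encoding, rasterizing the scatterplot density and the click-and-drag line into a two-channel input; the encoding details are our assumption for illustration, not the paper's actual representation:

```python
import numpy as np

def brush_input(points, drag_start, drag_end, res=8):
    """Rasterize a brushing interaction into a 2-channel image.

    Channel 0 holds the scatterplot point density; channel 1 holds
    the user's click-and-drag line. points and drag endpoints are
    (x, y) pairs normalized to [0, 1].
    """
    img = np.zeros((2, res, res))
    for x, y in points:                          # channel 0: data density
        img[0, int(y * (res - 1)), int(x * (res - 1))] += 1.0
    for t in np.linspace(0.0, 1.0, 4 * res):     # channel 1: drag path
        x = drag_start[0] + t * (drag_end[0] - drag_start[0])
        y = drag_start[1] + t * (drag_end[1] - drag_start[1])
        img[1, int(y * (res - 1)), int(x * (res - 1))] = 1.0
    return img
```

A trained CNN would then map such an image to a per-point selection probability, which is what lets the technique infer the intended subset from a single quick gesture.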