You are cordially invited to attend the midterm master project presentations on Friday, 28 May starting at 14:30. The session will be on Zoom (meeting details available on request).
The session on 28 May features the speaker listed below and will take about 45 minutes.
Speaker: Kushal Prakash
Title: A hybrid framework for multi-agent communication and strategizing in AI Worldcup simulation
Abstract: AI football competitions are gaining popularity thanks to their multi-agent ecosystems and the possibility of establishing communication between agents through a hybrid approach. The AI Worldcup is a relatively new competition involving a team of five simulated robots. Most teams in the recent edition of the competition executed moves poorly and lacked team coordination. This calls for a framework that solves the most elementary issues, easing the development of new strategies. In this work, we create a hybrid framework that closely resembles the functioning of an actual football team and allows new strategies to be developed using machine learning or algorithmic methods. Further, we will also investigate gameplay and strategy development.
You are cordially invited to attend the midterm master project presentations on Friday, 16 April starting at 14:30. The session will be on Zoom (meeting details available on request).
The session on 16 April features the two speakers listed below and will take about 1.5 hours.
Speaker: Mika Kuijpers
Speaker: Zehao Jing
Abstract: A diffusion curve is a vector graphics primitive created by diffusing colors defined along Bézier curves. Wang tiles are squares with colored edges, used to tile the plane such that the edge colors of neighboring tiles match. We implement an approach for generating seamless, aperiodic textures based on diffusion curves and Wang tiles, called the diffusion mosaic.
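To make the edge-matching constraint concrete, here is a minimal sketch of Wang tiling (with a hypothetical two-color tile set, not the thesis implementation): the plane is filled row by row, picking at each cell a tile whose west edge matches its left neighbor's east edge and whose north edge matches the south edge of the tile above.

```python
from itertools import product
import random

# A Wang tile is a square with colored edges, stored as (north, east, south, west).
# Hypothetical complete two-color tile set (every edge-color combination exists),
# so a matching tile can always be found for any cell.
TILES = list(product("rg", repeat=4))

def tile_plane(rows, cols, seed=0):
    rng = random.Random(seed)
    grid = [[None] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            candidates = [
                t for t in TILES
                if (j == 0 or t[3] == grid[i][j - 1][1])   # west edge matches left tile's east
                and (i == 0 or t[0] == grid[i - 1][j][2])  # north edge matches upper tile's south
            ]
            grid[i][j] = rng.choice(candidates)
    return grid
```

Random choice among the valid candidates is what keeps the resulting tiling aperiodic; every shared edge in the returned grid has the same color on both sides.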
You are cordially invited to attend the midterm master project presentations on Friday, 19 March starting at 14:30. The session will be on Zoom (meeting details available on request).
The session on 19 March features the three speakers listed below and will take about 1.5 hours.
Speaker 1: Wouter Raateland
Title: Interactive Wildfire Simulation in Mesoscale Plant Ecosystems
Abstract: Every year, more and larger wildfires occur. Simulations are used to study and predict the behavior of wildfires, but existing simulations at the mesoscale lack detail. This work builds a detailed mesoscale wildfire simulation on top of an existing ecosystem simulation. We implemented a fast numerical model for wood pyrolysis and a GPU-accelerated fluid simulation on an adaptive grid. This simulation can be used to study the effect of different plant distributions, soil, and weather conditions on the behavior of wildfires.
Speaker 2: Pieter Kools
Title: Physics-based model for point-based sail reconstruction
Abstract: The Sailing Innovation Centre has been researching optimized sail shapes for its sailing boats. By simulating sail shapes with models, predictions can be made about the expected shape of a sail under certain conditions. An important step in this research is measuring how well the real-life sail shape matches the shape expected from the model. In this thesis, we propose a physics-based method to reconstruct a sail configuration from a known (possibly flexible) sail shape and a set of measured points on a real-life sail. We will also investigate the impact of the number of measured points and their positions on the reconstruction result.
Speaker 3: Max Lopes Cunha
Title: Reduced Projective Skinning for real-time deformable characters
Abstract: Character skinning is the art and science of expressing the vertex displacements when a character takes a particular pose. Projective Skinning is a method capable of producing dynamic tissue motion and resolving (self-)collisions in real time, which we can speed up further by formulating the physics simulation in a reduced space. In this work, we investigate how these subspaces can be derived from data and how to use them to add real-time skin deformation to humanoid characters.
You are cordially invited to attend the midterm master project presentations on Friday, 19 February starting at 14:30. The session will be on Zoom (meeting details available on request).
The session on 19 February features the three speakers listed below and will take about 1.5 hours.
Speaker 1: Nejc Maček
Title: Real-time relighting of human faces with a low-cost setup
Abstract: Relighting – a process defined as changing the appearance of a subject in an image under novel illumination conditions – often requires specialized equipment to produce believable results. We propose a method to capture an abstract relighting model with a low-cost setup using a smartphone camera. This model is used to perform relighting in real-time on a commodity computer.
Speaker 2: Zhoufan Jia
Title: Fast approximation of the inverse reflector problem
Abstract: Suppose we have a target radiance distribution, a light source, and a plane for receiving light. How do we design a reflector that produces a result similar to the target radiance distribution? This problem is of high interest to lighting designers and related industries, such as lamp manufacturers. The inverse reflector problem can be summarized as a high-dimensional global optimization problem. Existing algorithms are either not fully compatible with parallel acceleration or have too narrow an application scope (they can only handle the far-field problem). We propose a method for generating a fast approximation of the reflector's inverse design, which can also serve as the initial guess for a finer optimization.
Speaker 3: Matthias Tavasszy
Title: Real-Time Global Illumination using BRSM and Light Cuts
Abstract: Global illumination is light that bounces through an environment multiple times before reaching an observer, and it is very computationally expensive to simulate. To approximate this effect in real time, this work combines two previous techniques, Bidirectional Reflective Shadow Maps and Light Cuts, to quickly generate, organize, and sample Virtual Point Lights for gathering second-bounce illumination at a given location. The program is implemented in Vulkan, using RTX ray tracing for occlusion checks.
We have a PhD joining our group, Yang Chen, welcome!
Yang received his MSc in Computer Science (Images, Games and Intelligent Agents track) from the University of Montpellier in 2020. He will be working under the supervision of Dr. Ricardo Marroquim on research into surface capturing.
We have two midterm master project presentations on Friday, 18 September starting 15:45. The session will be on Zoom.
Presenter: Berend Baas
Title: Latent shape editing
Abstract: In recent years, deep learning on shapes and manifolds has been used to perform a variety of tasks, such as classification, deformation transfer, and shape matching. This is often done through architectures such as autoencoders or generative adversarial networks, which try to learn a vector representation of training shapes that is then used for downstream tasks.
However, current trained representations are generally poorly structured: their latent spaces consist of manifolds that are entangled and highly non-linear. This makes it difficult to predict the effect of modifications in the latent space on the output of the network. In this work, we investigate the latent space of shape networks to develop techniques for obtaining semantic deformations from latent editing operations. We consider two approaches: developing techniques to navigate complex entangled latent spaces, and developing less entangled, more interpretable representations that can help provide semantic editing operations.
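As a toy illustration of what a latent editing operation is (using a hypothetical linear "decoder", not any of the networks above): a semantic edit moves a latent code along a direction and decodes the result back into a shape.

```python
import random

# Hypothetical linear "decoder": maps an 8-D latent code to 30 output
# coordinates (a stand-in for a shape). Real shape networks are non-linear,
# which is exactly why their latent spaces are hard to navigate.
rng = random.Random(0)
LATENT_DIM, OUT_DIM = 8, 30
W = [[rng.gauss(0, 1) for _ in range(LATENT_DIM)] for _ in range(OUT_DIM)]

def decode(z):
    # matrix-vector product: one output coordinate per decoder row
    return [sum(w_i * z_i for w_i, z_i in zip(row, z)) for row in W]

z = [rng.gauss(0, 1) for _ in range(LATENT_DIM)]  # latent code of some shape
d = [1.0] + [0.0] * (LATENT_DIM - 1)              # a hypothetical semantic direction
edited = decode([z_i + 0.5 * d_i for z_i, d_i in zip(z, d)])
```

In a disentangled, interpretable latent space, a direction like `d` would correspond to one semantic attribute of the shape; in an entangled space, the same step changes many attributes at once.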
Presenter: Ruben Vroegindeweij
Title: Depicting motion in a still image by spatio-temporal image fusion
You are cordially invited to attend our CG Colloquium on Thursday, March 12th, 2020, 15:45-17:45h, Lecture Hall D@ta (Building 36)
The program features the following two speakers:
Title: 3D Human Pose Estimation Using a Top-view Depth Camera
Abstract: Delirium is a cause of concern within the health industry, as many post-surgery patients succumb to this mental condition, which disturbs their path to a full recovery. To understand and detect the onset of delirium within hospital ICU rooms, a depth camera (Microsoft Kinect) is attached to the ceiling. This depth data preserves privacy while also providing an opportunity to analyze the interactions taking place between the various stakeholders, such as the patient, hospital staff, and visiting family. This project is being done at Philips Research, Eindhoven, where my task is to extract the 3D human pose of individuals in the rooms. To this end, I extract the 3D point cloud data and run a supervised learning technique (a 3D Convolutional Neural Network) to extract the human pose. Having established a baseline, I am now investigating unsupervised and semi-supervised techniques to reduce the data and data annotation requirements, respectively.
Title: Photoshop for dummies: Energy-based image modification for photography composition
Abstract: Cameras have almost reached their limits in terms of hardware and optics; computational methods are now the main way to improve a photograph. However, few tools have been developed to enhance image composition. In this project, we introduce new methods based on photography rules to help photographers modify a picture's composition. We present a general energy-based approach to image deformation, and applications of this approach to problems of photography composition. Our method is inspired by prior work. The key advantage of our operator is its content-aware deformation function, which optimizes the locations of pixel modifications. The operator has been developed to change the line composition of photographs.