|Christian Kehl, M.Eng.
TU Delft: EWI Building, Room HB 02.120 (INSYGHT Lab) and HB 11.240
e-mail <c.kehl AT tudelft.nl>
Curriculum Vitae (short, academic)
Curriculum Vitae (detailed)
I am a PhD candidate at the TU Delft, working on Large-Scale, Interactive Simulation and Visualization of Flooding Scenarios. I started this position in March 2012 under the supervision of Gerwin de Haan.
Before that, I did my Master's thesis project at the TU Delft, working on conformal, multi-material meshing schemes. I studied Multimedia Engineering at the University of Applied Sciences Wismar from 2006 to 2011. From March to August 2011 I studied Vision, Graphics and Interactive Systems at Aalborg University. From September 2011 to March 2012 I finished my Master's degree with a project at the TU Delft in the Medical Visualization group (see “Related Project”).
Remote Scene Modification
This line of research is about parallel algorithms, implemented on-chip, to modify (colour, displace, remove) large amounts of 3D data in real-time and remotely from several network sources. These techniques find their application in, for example, decision-making for large user groups.
This work proposes a conceptual framework (with a particular implementation for validation) for combining heterogeneous input devices into one abstract description, which is then used to send interaction commands consistently over the network. Although the work is similar to VRPN, we derive a different abstract description.
In this project, we develop techniques for splitting up the rendering surface and rendering each sub-screen in parallel (similar to Chromium and Equalizer). Our research focuses on the rendering of 3D stereo scenes on arbitrary projection setups. The correct configuration of the sub-screens is the major challenge, because unsuitable projection setups cause problems when fusing the sub-screens into the full-extent 3D scene.
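The sub-screen splitting can be illustrated with a small sketch. The following is a minimal example (not the project's actual code; the function and its parameter names are purely illustrative) of how a full view frustum could be partitioned into per-tile asymmetric frusta, each rendered independently:

```python
def tile_frustum(left, right, bottom, top, cols, rows, cx, cy):
    """Return the asymmetric frustum extents (l, r, b, t) of tile
    (cx, cy) in a cols x rows grid that partitions the full frustum
    [left, right] x [bottom, top]."""
    w = (right - left) / cols   # width of one tile on the near plane
    h = (top - bottom) / rows   # height of one tile on the near plane
    l = left + cx * w
    b = bottom + cy * h
    return (l, l + w, b, b + h)
```

Each render node then draws only its own sub-frustum; fused together, the tiles cover the full scene extent exactly once.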
Interactive Simulation and Visualization of Flooding Scenarios
Multi-Material mesh generation for noisy labeled volumes – DeVIDE FE-Mesher
3D Scanning and Reconstruction of Large Scale Environments
One task in the field of virtual and augmented reality is the acquisition of 3D environmental scenes into which artificial three-dimensional objects can be integrated. This is commonly done by scanning the environment with stereo camera systems. The major drawback of these solutions is their high purchase cost. New technological solutions for three-dimensional scanning have emerged over recent years, such as Microsoft's Kinect.
Therefore, the aim, in cooperation with the VR Laboratory of Aalborg University, is to create 3D environmental scenes using low-cost depth-scanning equipment. Connected with that task is research on the usability of low-cost equipment in terms of accuracy and noise susceptibility.
A variety of approaches has been analyzed and tested to solve each subtask, ranging from calibration and pose tracking to 3D reconstruction, as presented in this report. The final system uses a point-based tracking approach realized with SURF feature extraction, minimum-correlation feature matching and Gauss-Newton iteration-based extraction of the transformation parameters. The memory-limiting task of large-scale point cloud storage is solved with a dynamically growing hybrid hash map. A per-frame, image-based reconstruction algorithm is used for in-place augmented reality. Full scan reconstruction is done using a parallelized version of Marching Cubes.
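The hash-map idea behind the point cloud storage can be sketched briefly. The system's actual hybrid hash map is more elaborate; the following is a simplified, pure-Python illustration of the core principle (all names are hypothetical): points are bucketed into voxels addressed through a hash map, so memory grows only with the occupied part of space rather than with the scanned volume's bounding box.

```python
class VoxelHashMap:
    """Sparse point cloud storage: each 3D point is hashed into the
    voxel cell containing it, so only occupied cells consume memory."""

    def __init__(self, voxel_size=0.05):
        self.voxel_size = voxel_size
        self.buckets = {}   # (i, j, k) integer cell -> list of points

    def _key(self, point):
        # Quantize a 3D point to its integer voxel coordinates.
        return tuple(int(c // self.voxel_size) for c in point)

    def insert(self, point):
        self.buckets.setdefault(self._key(point), []).append(point)

    def occupied_voxels(self):
        return len(self.buckets)
```

The map grows dynamically as new frames are integrated; lookup and insertion stay amortized constant-time regardless of the scene extent.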
As a result, the current implementation does not meet the real-time requirements, leaving room for SIMD and MIMD parallelization optimizations.
The system is limited by high noise susceptibility and the need for highly accurate feature points, particularly in feature-poor environments. These limitations are tolerable given the low cost of the system.
3D Scanning and Reconstruction of Large Scale Environments (Aalborg University – VGIS Semester Project) – Report
Depth image recognition using isomorphic graph theory
This was a group project addressing the challenge of 6-Degree-of-Freedom (DoF) depth image registration when scanning 3D environments. Instead of using a 6-DoF SLAM framework, we extracted object information to register subsequent environment scans, which is robust to noise (introduced by the scanner) and to changes in the scanned scene. We therefore establish neighbourhood relations between regions in a depth image and represent these relations in a graph. We then match multiple scans via object correspondence by checking for double subgraph isomorphism between their neighbourhood graphs. My contribution to this project was the realisation of the double subgraph isomorphism check.
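As an illustration of the matching step, here is a compact backtracking check for (single-direction, non-induced) subgraph isomorphism on small adjacency-set graphs. It is a didactic sketch, not the project's implementation; the double check applies such a test in both directions on the neighbourhood graphs:

```python
def subgraph_isomorphic(small, big):
    """Backtracking test whether `small` (a dict: node -> set of
    neighbours) is isomorphic to some (not necessarily induced)
    subgraph of `big`: every edge of `small` must map onto an edge
    of `big`."""
    nodes = list(small)

    def extend(mapping):
        if len(mapping) == len(nodes):
            return True
        u = nodes[len(mapping)]
        for v in big:
            if v in mapping.values():
                continue   # each big-node may be used only once
            # every already-mapped neighbour of u must land on a
            # neighbour of the candidate v
            if all(w not in mapping or mapping[w] in big[v]
                   for w in small[u]):
                mapping[u] = v
                if extend(mapping):
                    return True
                del mapping[u]
        return False

    return extend({})
```

Exact subgraph isomorphism is NP-complete in general, but the neighbourhood graphs per depth image are small enough for such exhaustive search to be practical.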
Depth image recognition using isomorphic graph theory (Aalborg University – VGIS Graph Theory course project) – Presentation
MPEG-1 Part 2 GPU-based Encoder
The focus of this MPEG-1 Part 2 implementation is the usage of the graphics adapter as a high-performance computing unit that runs all required calculations.
In recent years, many of the standard high-definition video encoding algorithms have been ported to the GPU with considerable performance gains. There is currently no known GPU implementation of MPEG-1 Part 2, which is why this implementation path was chosen.
Current graphics processing units are capable of computing general-purpose algorithms thanks to new chip architectures like CUDA (NVIDIA) and AMD Stream (AMD). On these architectures, former shader processors act as parallel SIMD processing units. Taking into account the number of such units resident on one GPU chip, this yields a high amount of computing power. It is useful for video processing because the majority of applicable algorithms are highly parallelizable. Images and computer graphics share a similar data layout, which eases the algorithm mapping: each uncorrelated element in the n-dimensional space is processed with linear algebra methods.
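The per-block independence that makes video coding GPU-friendly can be seen in the DCT stage of MPEG-1 intra coding: every 8×8 block is transformed independently of all others. A naive reference version in Python for illustration only; on the GPU, one thread (or thread group) would handle one block or one coefficient:

```python
import math

def dct_8x8(block):
    """Naive 2-D DCT-II of one 8x8 sample block, as used per block in
    MPEG-1 intra coding. Each block is independent of all others,
    which is exactly what SIMD units exploit."""
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            cu = 1 / math.sqrt(2) if u == 0 else 1.0
            cv = 1 / math.sqrt(2) if v == 0 else 1.0
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / 16)
                    * math.cos((2 * y + 1) * v * math.pi / 16)
                    for x in range(N) for y in range(N))
            out[u][v] = 0.25 * cu * cv * s
    return out
```

For a flat block, all energy ends up in the DC coefficient `out[0][0]`; the subsequent quantization and run-length coding stages operate on these coefficients.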
MPEG-1 Framework – Full Report
Development of a GPGPU Video Encoding Server Application in a Multi-GPU environment
Since the release of the first OpenCL version, an emerging interest in porting highly parallel, highly complex tasks to the GPU exists in many computing branches. One of the most profitable branches within GPGPU computing is image and video processing. While OpenCL is being used for the development of new desktop software, present online video services do not use this technology so far. In times of low-performance, small-format computers like netbooks, display workstations and handhelds, the integration of GPGPU computing into multimedia server applications can be a significant push for web start-ups, gathering new users and markets. Therefore, the department of multimedia engineering at the University of Applied Sciences Wismar has formed a small group to create a GPU-based video processing service at prototype level.
A console server application, written in C and based on OpenCL and OpenCV, has been created for fast video encoding and manipulation. The server application is controlled by a Silverlight RIA with a modern layout, as well as a client-side video player and key-frame extractor. Measurements have shown significant performance advantages, proving this application to be a pointer in the right direction for server-side video processing.
Development of a GPGPU Video Encoding Server Application in a Multi-GPU environment (University of Applied Sciences Wismar – Master study project) – Tech Report
Research on the Optimization of Graphical Data Processing Systems in Multi-GPU Environments
Design and Implementation of a Web Interface for mobile IT-Applications
Mobile devices are steadily becoming more advanced. At the same time, the penetration of these advanced devices (in Western countries) is reaching more than one device per person, creating a big market for mobile applications and services. Although mobile phone manufacturers are currently seeing a decline in sales of basic phones, the market for smartphones, like BlackBerry devices for business people or Apple's iPhone, is steadily increasing.
One of the companies trying to profit from this shift in the mobile landscape is M2Mobi. The company was founded in September 2006, led by Michiel Munneke and Michiel Baneke. M2Mobi is specialized in software development for all kinds of mobile phones. One department in the company is dedicated specifically to developing web interfaces for mobile phones. The biggest challenge this department faces is dealing with all the different devices that are currently used.
The core product of M2Mobi is Nulaz, a platform for finding out what is happening around you. The main pillars of the platform were a J2ME mobile client and a web page aimed at regular PCs. There was also a mobile site, but it was very basic. Together with the rest of the mobile web team, it was decided not to continue with that old mobile site but to start from scratch. That way it was possible to properly outline requirements and specifications, and to produce well-documented and manageable code. After thorough testing, the new site was put live, supporting a wide variety of handsets.
Nulaz is a location-based social network. People use Nulaz to see where friends and favourite locations are. Additionally, Nulaz is a content distributor: several location-based RSS feeds are stored in the backend to provide users with information about restaurants, cinemas, sights, general information and much more in the area the user is in. Consequently, it gives an optimal overview of what is happening around the user. All this content is visualized on a map, which is the main aspect of the application.
See my complete publications list here