Computer Graphics and Interactive Systems


Digital Fabrication. | Augmented and Virtual Reality. | Vision. | Machine Learning.

+49 30 4504 5217
Beuth University of Applied Sciences, FB VI, Luxemburger Str. 20a, D-13353 Berlin, Germany






Prof. Dr.-Ing. Kristian Hildebrand

I received my PhD from Technische Universität Berlin in 2013 under the supervision of Prof. Dr.-Ing. Marc Alexa. I hold a Diploma from the Computer Science and Media programme of the Bauhaus University Weimar. During my studies I visited the University of British Columbia in Vancouver and the Max-Planck-Institut Informatik in Saarbrücken. From 2006 to 2008 I worked as a software developer at art+com AG in Berlin, where I participated in the development of several innovative media installations. From August to October 2012 I was a visiting researcher at Disney Research Zurich. Between 2014 and 2015 I worked as Principal Research Engineer at DISDAR GmbH, developing web-based machine learning technologies. I am also a co-founder of the art technology platform kunstmatrix. My research interests span Digital Fabrication, Computer Graphics, Computer Vision and Machine Learning.

Teaching. Thesis.

IMPORTANT: Notes on writing a thesis with me as your supervisor

If you would like to write your thesis under my supervision, please read and follow these notes.

Interactive Systems / Computer Vision

This master course focuses on topics in Computer Vision. Students will learn about and implement selected topics such as AR marker tracking, image retrieval and machine learning approaches, and 3D reconstruction. Exercises will use the OpenCV library. More information can be found here.

Visual and Scientific Computing

This bachelor course focuses on important methods in computer science. Students will learn about selected topics in machine learning, image processing and computer graphics through hands-on examples. In mini-projects, topics such as computed tomography, face recognition, image processing, optimization and neural networks are explained and implemented. The motto: it's not rocket science, it's fun science. More information can be found here.

Introduction to Scientific Work

Prepare yourself to write your master thesis. We will discuss questions such as: What makes a good paper a good paper? What is scientific work? What makes a good talk? What is good writing? What is bad writing? How does a conference work? And many more. More information can be found here.

Game Programming

The course focuses on developing a series of small games, from idea to implementation. Students will learn about a variety of technical aspects of game engines and implement their own games in small teams.


Starcraft II Game AI

In recent years, a variety of frameworks have enabled games and simulations to serve as environments for training intelligent agents with reinforcement learning algorithms. The goal of this project was to train game agents for the StarCraft II game environment based on DeepMind's PySC2 API. Specifically, we provide scenarios commonly found in real-world competitive 1-vs-1 play (small-scale combat scenarios). Using these scenarios, complex models are trained that achieve results comparable to human players and, to the best of our knowledge for the first time, even score higher rewards than expert players in a competitive scenario.
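The core training loop behind such agents can be illustrated in miniature. The following sketch is not the project's code (the actual agents use PySC2 environments and far larger models); it is a minimal tabular Q-learning example on a hypothetical 1D "advance vs. retreat" scenario, where the agent earns a reward for winning the engagement at the far end.

```python
import random

def train(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    """Tabular Q-learning on a toy chain: advance to the last state to win."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]   # actions: 0 = retreat, 1 = advance
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0   # reward only on winning
            # standard Q-learning update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the learned values favor advancing near the goal, which is the same credit-assignment mechanism that, at much larger scale, drives the StarCraft II agents.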

Sketch classification using Deep Neural Networks

Sketching enables users to depict the visual world and communicate when written or spoken language is not an option. In addition, mobile touchscreen devices support this communication ubiquitously and thereby provide users with an exceptionally intuitive human-computer interface. In this work we present a system that has learned to recognize and interpret human input sketches. Building on recent state-of-the-art machine learning research, we achieve a classification accuracy of 80.22%, which outperforms previous work as well as human recognition accuracy (73.10%).

Optimal Discrete Slicing

Slicing is the procedure necessary to prepare a shape for layered manufacturing. There are degrees of freedom in this process, such as the starting point of the slicing sequence and the thickness of each slice. The choice of these parameters influences the manufacturing process and its result: the number of slices significantly affects the time needed for manufacturing, while their thickness affects the error. Assuming a discrete setting, we measure the error as the number of voxels that are incorrectly assigned due to slicing. We provide an algorithm that generates, for a given shape and set of available slice heights, a slicing that is provably optimal. We demonstrate the practical importance of our optimization on several 3D-printed results.
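The flavor of this optimization can be conveyed with a much-simplified sketch (not the paper's algorithm): the shape is abstracted as a 1D column of per-unit cross-section areas, a slice replicates the cross-section of its bottom unit across all units it covers, and dynamic programming picks slice thicknesses from the available set to minimize the total mismatch. The function names and cost model here are illustrative assumptions.

```python
def slice_cost(areas, i, j):
    """Error of one slice covering units [i, j): units differing from unit i."""
    return sum(abs(areas[k] - areas[i]) for k in range(i, j))

def optimal_slicing(areas, thicknesses):
    """Return (minimal error, slice boundaries) for the whole column."""
    n = len(areas)
    best = [float("inf")] * (n + 1)   # best[j]: min error for units [0, j)
    prev = [0] * (n + 1)
    best[0] = 0.0
    for j in range(1, n + 1):
        for t in thicknesses:
            if t <= j and best[j - t] + slice_cost(areas, j - t, j) < best[j]:
                best[j] = best[j - t] + slice_cost(areas, j - t, j)
                prev[j] = j - t
    cuts, j = [], n
    while j > 0:            # recover the slice boundaries by backtracking
        cuts.append(j)
        j = prev[j]
    return best[n], sorted(cuts)
```

For a column whose cross-section is piecewise constant, the optimizer can place slice boundaries exactly at the jumps and reach zero error; with a coarser thickness set, the returned error counts the mismatched units.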

Sketch-based Pipeline for Mass Customization

We present a novel application workflow to physically produce personalized objects by relying on the sketch-based input metaphor. This is achieved by combining different sketch-based retrieval and modeling aspects and optimizing the output for 3D printing technologies. The workflow starts from a user drawn 2D sketch that is used to query a large 3D shape database. A simple but powerful sketch-based modeling technique is employed to modify the result from the query. Taking into account the limitations of the additive manufacturing process we define a fabrication constraint deformation to produce personalized 3D printed objects.

Orthogonal Slicing for Additive Manufacturing

Most additive manufacturing technologies work by slicing the shape and then generating each slice independently. This introduces an anisotropy into the process, often as different accuracies in the tangential and normal directions. We model this as an anisotropic cubic element. Our approach then finds a compromise between modeling each part of the shape individually in the best possible direction and using one direction for the whole shape part. Then we optimize a decomposition of the shape along this basis so that each part can be consistently sliced along one of the basis vectors. In simulation, we show that this approach is superior to slicing the whole shape in a single direction.

crdbrd: Shape Fabrication by Sliding Planar Slices

We introduce an algorithm and representation for fabricating 3D shape abstractions using mutually intersecting planar cut-outs. The planes have prefabricated slits at their intersections and are assembled by sliding them together. Based on an analysis of construction rules, we propose an extended binary space partitioning tree as an efficient representation of such cardboard models which allows us to quickly evaluate the feasibility of newly added planar elements. The complexity of insertion order quickly increases with the number of planar elements and manual analysis becomes intractable. We provide tools for generating cardboard sculptures with guaranteed constructibility.

Sketch-based Shape Retrieval

We develop a system for 3D object retrieval based on sketched feature lines as input. For objective evaluation, we collect a large number of query sketches from human users that are related to an existing database of objects. The sketches turn out to be generally quite abstract, with large local and global deviations from the original shape. Based on this observation, we decide to use a bag-of-features approach over computer-generated line drawings of the objects. We develop a targeted feature transform based on Gabor filters for this system. We can show objectively that this transform is better suited than other approaches from the literature developed for similar tasks. Moreover, we demonstrate how to optimize the parameters of our approach, as well as others, based on the gathered sketches. In the resulting comparison, our approach is significantly better than any other system described so far.
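The retrieval step of a bag-of-features pipeline can be sketched compactly. This is an illustrative toy, not the paper's implementation: it assumes each sketch or line drawing has already been encoded as a histogram over a vocabulary of local features (in the paper, Gabor-filter-based descriptors quantized into visual words), and ranks database objects by cosine similarity to the query histogram.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature histograms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank(query_hist, database):
    """database maps object id -> histogram; returns ids, best match first."""
    return sorted(database,
                  key=lambda oid: cosine(query_hist, database[oid]),
                  reverse=True)
```

A query whose histogram concentrates mass in the same visual words as an object's drawing ranks that object first; the real system additionally handles feature extraction, quantization, and inverted-index lookup for scale.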

Sketch-based Image Retrieval

For most image databases, browsing as a means of retrieval is impractical, and query-based searching is required. Queries are often expressed as keywords (or by other means than the images themselves), requiring the images to be tagged. In view of the ever increasing size of image databases, the assumption of an appropriate and complete set of tags might be invalid, and content-based search techniques become vital. We propose algorithms and specific image descriptors for sketch-based image retrieval.

Photosketcher: interactive sketch-based image synthesis

We introduce Photosketcher, an interactive system for progressively synthesizing novel images using only sparse user sketches as the input. Compared to existing approaches for synthesizing images from parts of other images, Photosketcher works on the image content exclusively; no keywords or other metadata associated with the images are required. Users sketch the rough shape of a desired image part, and we automatically search a large collection of images for images containing that part. The search is based on a bag-of-features approach using local descriptors for translation-invariant part retrieval. The compositing step again is based on user scribbles: from the scribbles we predict the desired part using Gaussian Mixture Models and compute an optimal seam using Graphcut.

Reflection Nebula Visualization

We describe here an interactive visualization tool for realistically rendering the appearance of arbitrary 3D dust distributions surrounding one or more illuminating stars. Our rendering algorithm is based on the physical models used in astrophysics research. The tool can be used to create virtual fly-throughs of reflection nebulae for interactive desktop visualizations, or to produce scientifically accurate animations in planetarium shows. The algorithm is also applicable to investigate on-the-fly the visual effects of physical parameter variations, exploiting visualization technology to help gain a deeper and more intuitive understanding of the complex interaction of light and dust in real astrophysical settings.

PRISAD: a partitioned rendering infrastructure for scalable accordion drawing

We use an information visualization technique called 'accordion drawing' that guarantees three key properties: context, visibility, and frame rate. We provide context through the navigation metaphor of a rubber sheet that can be smoothly stretched to show more detail in the areas of focus, while the surrounding regions of context are correspondingly shrunk. Landmarks, such as user-specified motifs or differences between aligned base pairs across multiple sequences, are guaranteed to be visible even if located in the shrunken areas of context. Our graphics infrastructure for progressive rendering provides immediate responsiveness to user interaction by guaranteeing that we redraw the scene at a target frame rate.


