Name: Aljaž Božič
Position: Ph.D. Candidate
Phone: +49-89-289-18489
Room No: 02.13.040


I am Aljaž. I completed a Bachelor's degree in Mathematics in Slovenia and a Master's degree in Informatics at TUM. I previously worked on Simultaneous Localization and Mapping (SLAM), estimating camera motion and reconstructing static scenes from monocular videos. My Ph.D. topic goes one step further: tracking and reconstructing non-rigidly deforming objects in dynamic environments, using a single RGB camera.

Research Interests

Non-rigid 3D reconstruction, deep learning on 4D shapes (in the spatial and temporal domains), real-time optimization.



Neural Non-Rigid Tracking
Aljaž Božič, Pablo Palafox, Michael Zollhöfer, Angela Dai, Justus Thies, Matthias Nießner
NeurIPS 2020
We introduce a novel, end-to-end learnable, differentiable non-rigid tracker that enables state-of-the-art non-rigid reconstruction. By enabling gradient back-propagation through a non-rigid as-rigid-as-possible optimization solver, we are able to learn correspondences in an end-to-end manner such that they are optimal for the task of non-rigid tracking.
[video][bibtex][project page]

Learning to Optimize Non-Rigid Tracking
Yang Li, Aljaž Božič, Tianwei Zhang, Yanli Ji, Tatsuya Harada, Matthias Nießner
CVPR 2020 (Oral)
We learn to track non-rigid objects by differentiating through the underlying non-rigid solver. Specifically, we propose ConditionNet, which learns to generate a problem-specific preconditioner from a large number of training samples of the Gauss-Newton update equations. The learned preconditioner increases the convergence speed of PCG by a significant margin.
[bibtex][project page]

DeepDeform: Learning Non-rigid RGB-D Reconstruction with Semi-supervised Data
Aljaž Božič, Michael Zollhöfer, Christian Theobalt, Matthias Nießner
CVPR 2020
We present a large dataset of 400 scenes, over 390,000 RGB-D frames, and 5,533 densely aligned frame pairs, and introduce a data-driven non-rigid RGB-D reconstruction approach using learned heatmap correspondences, achieving state-of-the-art reconstruction results on a newly established quantitative benchmark.
[video][code][bibtex][project page]