Name: Aljaž Božič
Position: Ph.D. Candidate
E-Mail: aljaz.bozic@tum.de
Phone: +49-89-289-18489
Room No: 02.13.040

Bio

I am Aljaž. I completed a Bachelor's degree in Mathematics in Slovenia and a Master's degree in Informatics at TUM. I previously worked on Simultaneous Localization and Mapping (SLAM), estimating camera motion and reconstructing static scenes from monocular videos. My Ph.D. topic goes one step further: tracking and reconstructing non-rigidly deforming objects in dynamic environments using a single RGB camera.

Research Interest

Non-rigid 3D reconstruction, deep learning with 4D shapes (in the spatial and temporal domains), real-time optimization.

Publications

2022

RC-MVSNet: Unsupervised Multi-View Stereo with Neural Rendering
Di Chang, Aljaž Božič, Tong Zhang, Qingsong Yan, Yingcong Chen, Sabine Süsstrunk, Matthias Nießner
ECCV 2022
We introduce RC-MVSNet, a neural-rendering-based unsupervised Multi-View Stereo 3D reconstruction approach. First, we leverage NeRF-like rendering to generate consistent photometric supervision for non-Lambertian surfaces in the unsupervised MVS task. Second, we impose a depth rendering consistency loss to refine the initial depth map predicted by a naive photometric consistency loss. We also propose Gaussian-Uniform sampling to improve NeRF's ability to learn geometry features close to the object surface, which overcomes occlusion artifacts present in existing approaches.
[bibtex][project page]

2021

TransformerFusion: Monocular RGB Scene Reconstruction using Transformers
Aljaž Božič, Pablo Palafox, Justus Thies, Angela Dai, Matthias Nießner
NeurIPS 2021
We introduce TransformerFusion, a transformer-based 3D scene reconstruction approach. The input monocular RGB video frames are fused into a volumetric feature representation of the scene by a transformer network that learns to attend to the most relevant image observations, resulting in an accurate online surface reconstruction.
[video][bibtex][project page]

NPMs: Neural Parametric Models for 3D Deformable Shapes
Pablo Palafox, Aljaž Božič, Justus Thies, Matthias Nießner, Angela Dai
ICCV 2021
We propose Neural Parametric Models (NPMs), a learned alternative to traditional, parametric 3D models. 4D dynamics are disentangled into latent-space representations of shape and pose, leveraging the flexibility of recent developments in learned implicit functions. Once learned, NPMs enable optimization over the learned spaces to fit to new observations.
[video][code][bibtex][project page]

Neural Deformation Graphs for Globally-consistent Non-rigid Reconstruction
Aljaž Božič, Pablo Palafox, Michael Zollhöfer, Justus Thies, Angela Dai, Matthias Nießner
CVPR 2021 (Oral)
We introduce Neural Deformation Graphs for globally-consistent deformation tracking and 3D reconstruction of non-rigid objects. Specifically, we implicitly model a deformation graph via a deep neural network and impose per-frame viewpoint consistency as well as inter-frame graph and surface consistency constraints in a self-supervised fashion.
[video][bibtex][project page]

2020

Neural Non-Rigid Tracking
Aljaž Božič, Pablo Palafox, Michael Zollhöfer, Angela Dai, Justus Thies, Matthias Nießner
NeurIPS 2020
We introduce a novel, end-to-end learnable, differentiable non-rigid tracker that enables state-of-the-art non-rigid reconstruction. By enabling gradient back-propagation through a non-rigid as-rigid-as-possible optimization solver, we are able to learn correspondences in an end-to-end manner such that they are optimal for the task of non-rigid tracking.
[video][bibtex][project page]

Learning to Optimize Non-Rigid Tracking
Yang Li, Aljaž Božič, Tianwei Zhang, Yanli Ji, Tatsuya Harada, Matthias Nießner
CVPR 2020 (Oral)
We learn the tracking of non-rigid objects by differentiating through the underlying non-rigid solver. Specifically, we propose ConditionNet, which learns to generate a problem-specific preconditioner using a large number of training samples from the Gauss-Newton update equation. The learned preconditioner increases the convergence speed of the preconditioned conjugate gradient (PCG) solver by a significant margin.
[bibtex][project page]
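For context, the PCG solver mentioned in the abstract above iteratively solves the symmetric positive-definite linear systems arising from Gauss-Newton updates. The sketch below is a minimal, illustrative PCG implementation using a classical hand-crafted Jacobi (diagonal) preconditioner as the baseline that a learned preconditioner like ConditionNet would replace; the function and variable names are my own, not from the paper.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=100):
    """Preconditioned conjugate gradient for an SPD matrix A.

    M_inv is an approximation of A^{-1}; a better preconditioner
    clusters the eigenvalues of M_inv @ A and speeds up convergence.
    """
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    z = M_inv @ r          # preconditioned residual
    p = z.copy()           # initial search direction
    rz = r @ z
    for i in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)   # step length along p
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, i + 1
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # conjugate direction update
        rz = rz_new
    return x, max_iter

# Toy SPD system with a Jacobi (diagonal) preconditioner.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
M_inv = np.diag(1.0 / np.diag(A))
x, iters = pcg(A, b, M_inv)
```

In the paper's setting, A would be the Gauss-Newton system matrix J^T J of the non-rigid alignment energy, and the learned preconditioner would be predicted per problem instead of taken from the diagonal.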

DeepDeform: Learning Non-rigid RGB-D Reconstruction with Semi-supervised Data
Aljaž Božič, Michael Zollhöfer, Christian Theobalt, Matthias Nießner
CVPR 2020
We present a large dataset of 400 scenes, over 390,000 RGB-D frames, and 5,533 densely aligned frame pairs, and introduce a data-driven non-rigid RGB-D reconstruction approach using learned heatmap correspondences, achieving state-of-the-art reconstruction results on a newly established quantitative benchmark.
[video][code][bibtex][project page]