Name: Ziya Erkoc
Position: Ph.D. Candidate
E-Mail: ziya.erkoc@tum.de
Phone: TBD
Room No: 02.07.041

Bio

Hi, my name is Ziya. I received my BSc and MSc degrees in Computer Engineering from Bilkent University, Turkey. I previously worked on memory-efficient tetrahedral mesh generation. During my bachelor's studies, I participated in Google Summer of Code as a contributor to the cBioPortal for Cancer Genomics tool. I also worked at Taleworlds Entertainment as a software engineering intern. Homepage

Research Interest

3D Reconstruction, Generative Modelling, Geometry Processing, Geometric Deep Learning

Publications

2025

MeshPad: Interactive Sketch-Conditioned Artist-Designed Mesh Generation and Editing
Haoxuan Li, Ziya Erkoç, Lei Li, Daniele Sirigatti, Vladislav Rosov, Angela Dai, Matthias Nießner
ICCV 2025
MeshPad is a generative system for creating and editing 3D triangle meshes from sketch inputs. Designed for interactive workflows, it allows users to iteratively delete and add mesh parts through simple sketch edits. MeshPad uses a Transformer-based triangle sequence model with fast speculative prediction, achieving significantly better accuracy and user preference than prior methods.
[video][bibtex][project page]
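To give a flavor of the "triangle sequence model" idea mentioned above, here is a toy, heavily simplified sketch. It is not MeshPad's implementation: the tokenization (quantized vertex coordinates, one triangle = 9 tokens), the start/end tokens, and all sizes and hyperparameters are placeholders chosen for illustration, and sketch conditioning and speculative prediction are omitted entirely.

# Toy sketch (not the MeshPad code): a triangle mesh as a flat token sequence,
# generated autoregressively by a causal Transformer. All names and sizes below
# are illustrative assumptions.
import torch
import torch.nn as nn

NUM_BINS = 128          # quantization levels per coordinate (assumption)
SEQ_LEN = 9 * 64        # up to 64 triangles, 3 vertices x 3 coords each
VOCAB = NUM_BINS + 1    # coordinate bins + end-of-mesh token
EOS = NUM_BINS

class TriangleSeqModel(nn.Module):
    def __init__(self, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, d_model)
        self.pos = nn.Embedding(SEQ_LEN, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model,
                                           batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB)

    def forward(self, tokens):
        # tokens: (B, T) integer coordinate bins
        B, T = tokens.shape
        x = self.tok(tokens) + self.pos(torch.arange(T, device=tokens.device))
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(tokens.device)
        h = self.decoder(x, mask=mask)        # causal self-attention
        return self.head(h)                   # next-token logits

@torch.no_grad()
def sample_mesh(model, max_tokens=SEQ_LEN):
    seq = torch.zeros(1, 1, dtype=torch.long)  # placeholder start token
    for _ in range(max_tokens - 1):
        logits = model(seq)[:, -1]
        nxt = torch.distributions.Categorical(logits=logits).sample()
        if nxt.item() == EOS:
            break
        seq = torch.cat([seq, nxt[:, None]], dim=1)
    coords = seq[0, 1:].float() / (NUM_BINS - 1)   # de-quantize to [0, 1]
    coords = coords[: (coords.numel() // 3) * 3]   # drop any incomplete vertex
    return coords.reshape(-1, 3)                   # (num_generated_vertices, 3)

model = TriangleSeqModel()
verts = sample_mesh(model, max_tokens=9 * 4 + 1)   # sample a few triangles
print(verts.shape)

With an untrained model this only produces random vertices; the point is the representation: once a mesh is a discrete token sequence, editing can be cast as regenerating part of that sequence.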

PrEditor3D: Fast and Precise 3D Shape Editing
Ziya Erkoç, Can Gümeli, Chaoyang Wang, Matthias Nießner, Angela Dai, Peter Wonka, Hsin-Ying Lee, Peiye Zhuang
CVPR 2025
We propose a training-free approach to 3D editing that enables the editing of a single shape within a few minutes. The edited 3D mesh aligns well with the prompts and remains unchanged in regions that are not intended to be altered. Extensive experiments demonstrate the superiority of our method over previous approaches, enabling fast, high-quality editing while leaving untargeted regions intact.
[video][bibtex][project page]

2023

HyperDiffusion: Generating Implicit Neural Fields with Weight-Space Diffusion
Ziya Erkoç, Fangchang Ma, Qi Shan, Matthias Nießner, Angela Dai
ICCV 2023
We propose HyperDiffusion, a novel approach for unconditional generative modeling of implicit neural fields. HyperDiffusion operates directly on MLP weights and generates new neural implicit fields encoded by synthesized MLP parameters. It enables diffusion modeling over an implicit, compact, and yet high-fidelity representation of complex signals across 3D shapes and 4D mesh animations within a single unified framework.
[video][bibtex][project page]
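The core idea, diffusion over flattened MLP weights, can be illustrated with a toy sketch. This is not the HyperDiffusion code: the denoiser architecture, the noise schedule, and the stand-in "dataset" of weight vectors below are placeholder assumptions, and sampling plus decoding back into a field are only mentioned in a comment.

# Toy sketch (not the HyperDiffusion code): flatten the weights of a small
# occupancy MLP into one vector and fit a DDPM-style denoiser over a set of
# such vectors. All sizes and the random "dataset" are illustrative.
import torch
import torch.nn as nn

def flatten_mlp(mlp):
    # concatenate all parameters of one fitted MLP into a single 1-D vector
    return torch.cat([p.detach().reshape(-1) for p in mlp.parameters()])

# pretend we overfit 256 tiny MLPs (one per shape) and flattened each of them
template = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))
dim = flatten_mlp(template).numel()
weight_dataset = torch.randn(256, dim)          # stand-in for real fitted weights

# simple linear noise schedule
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cum = torch.cumprod(1.0 - betas, dim=0)

denoiser = nn.Sequential(                       # predicts the added noise
    nn.Linear(dim + 1, 512), nn.SiLU(),
    nn.Linear(512, 512), nn.SiLU(),
    nn.Linear(512, dim),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

for step in range(100):                         # a few illustrative steps
    x0 = weight_dataset[torch.randint(0, 256, (32,))]
    t = torch.randint(0, T, (32,))
    a = alphas_cum[t].unsqueeze(1)
    noise = torch.randn_like(x0)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * noise  # forward diffusion q(x_t | x_0)
    t_embed = (t.float() / T).unsqueeze(1)       # crude timestep conditioning
    pred = denoiser(torch.cat([xt, t_embed], dim=1))
    loss = ((pred - noise) ** 2).mean()          # epsilon-prediction objective
    opt.zero_grad(); loss.backward(); opt.step()

# a sampled vector would then be reshaped back into the MLP's weight tensors and
# queried as an implicit field (omitted here)

The appeal of this weight-space view is that the same pipeline applies to any signal an MLP can encode, which is how one framework covers both 3D shapes and 4D mesh animations.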