Changfeng Ma
PhD Student
PhD Student, NJU Meta Graphics & 3D Vision Lab, Computer Science and Technology, Nanjing University, Nanjing, China, 210023.
Supervisor: Prof. Yanwen Guo
ORCID: 0000-0001-8732-7038
I received my bachelor's degree (2021) from the Department of Computer Science and Technology (Honored Class) at Nanjing University. I am currently working toward a PhD degree in the Department of Computer Science and Technology at Nanjing University. My research interests include 3D computer vision and point cloud understanding.
(NeurIPS 2022) Changfeng Ma, Yang Yang, Jie Guo, Fei Pan, Chongjun Wang, Yanwen Guo. Unsupervised Point Cloud Completion and Segmentation by Generative Adversarial Autoencoding Network. [paper][code]
ABSTRACT: Most existing point cloud completion methods assume the input partial point cloud is clean, which is not the case in practice, and are generally based on supervised learning. In this paper, we present an unsupervised generative adversarial autoencoding network, named UGAAN, which completes partial point clouds contaminated by their surroundings in real scenes and cuts out the objects simultaneously, using only artificial CAD models as assistance. The generator of UGAAN learns to predict complete point clouds on real data from both the discriminator and the autoencoding process of artificial data. The latent codes from the generator are also fed to the discriminator, which forces the encoder to extract only object features rather than noise. We also devise a refiner for generating better complete point clouds, together with a segmentation module that separates the object from the background. We train UGAAN on one real scene dataset and evaluate it on two others. Extensive experiments and visualizations demonstrate the superiority, generalization, and robustness of our method. Comparisons against previous methods show that our method achieves state-of-the-art performance on unsupervised point cloud completion and segmentation on real data.
TL;DR: We propose an unsupervised method for point cloud completion and segmentation.
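The following is a minimal sketch of the adversarial autoencoding idea described above, assuming simplified, hypothetical stand-ins (PointAutoencoder, latent_disc, chamfer) rather than the released UGAAN code: only the CAD branch receives a reconstruction loss, while the real-scan branch is supervised through a discriminator on the shared latent space.

```python
# Minimal sketch with hypothetical stand-in modules; see the released code
# for the actual UGAAN implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets of shape (B, N, 3)."""
    d = torch.cdist(a, b)                        # (B, Na, Nb) pairwise distances
    return d.min(2).values.mean() + d.min(1).values.mean()

class PointAutoencoder(nn.Module):
    """Toy per-point MLP encoder with max pooling, plus an MLP decoder."""
    def __init__(self, dim=256, n_out=1024):
        super().__init__()
        self.n_out = n_out
        self.enc = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, dim))
        self.dec = nn.Sequential(nn.Linear(dim, 512), nn.ReLU(), nn.Linear(512, n_out * 3))

    def forward(self, pts):                      # pts: (B, N, 3)
        z = self.enc(pts).max(dim=1).values      # (B, dim) global latent code
        return self.dec(z).view(pts.size(0), self.n_out, 3), z

def generator_loss(ae, latent_disc, real_partial, cad_complete, w_adv=0.1):
    """Autoencode clean CAD models; complete real scans by fooling a latent
    discriminator so the encoder ignores background contamination."""
    recon, _ = ae(cad_complete)                  # autoencoding of artificial data
    pred, z_real = ae(real_partial)              # completion of a contaminated scan
    logits = latent_disc(z_real)                 # real-scan code should look "clean"
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return chamfer(recon, cad_complete) + w_adv * adv, pred
```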
(CVPR 2023) Changfeng Ma, Yinuo Chen, Pengxiao Guo, Jie Guo, Chongjun Wang, Yanwen Guo. Symmetric Shape-Preserving Autoencoder for Unsupervised Real Scene Point Cloud Completion. [paper][code]
ABSTRACT: Unsupervised completion of real scene objects is of vital importance but remains extremely challenging in preserving input shapes, predicting accurate results, and adapting to multi-category data. To solve these problems, we propose an Unsupervised Symmetric Shape-Preserving Autoencoding Network, termed USSPA, to predict complete point clouds of objects from real scenes. One of our main observations is that many natural and man-made objects exhibit significant symmetries. To accommodate this, we devise a symmetry learning module that learns from such objects and preserves structural symmetries. Starting from an initial coarse predictor, our autoencoder refines the complete shape with a carefully designed upsampling refinement module. Besides the discriminative process on the latent space, the discriminators of USSPA also take predicted point clouds as direct guidance, enabling more detailed shape prediction. Clearly different from previous methods, which train each category separately, USSPA can be trained on multi-category data in one pass through a classifier-guided discriminator, with performance consistent with single-category training. For more accurate evaluation, we contribute to the community a real scene dataset with paired CAD models as ground truth. Extensive experiments and comparisons demonstrate the superiority and generalization of our method and show that it achieves state-of-the-art performance on unsupervised completion of real scene objects.
TL;DR: We propose an unsupervised method and an evaluation method for unsupervised (unpaired) real scene point cloud completion and achieve SOTA performance.
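As a rough illustration of the symmetry-preserving idea (not the official USSPA module), the sketch below predicts a reflection plane from a global feature and mirrors the input points across it; the module name, feature dimension, and plane parameterization are assumptions.

```python
# Illustrative symmetry-learning sketch; SymmetryMirror is a hypothetical
# stand-in, not the actual USSPA symmetry module.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SymmetryMirror(nn.Module):
    """Predicts a reflection plane n.x + d = 0 from a global feature and
    mirrors the input points across it, giving a symmetry-aware proposal."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.plane_head = nn.Linear(feat_dim, 4)           # (nx, ny, nz, d)

    def forward(self, pts, feat):                          # pts: (B, N, 3), feat: (B, feat_dim)
        plane = self.plane_head(feat)
        n = F.normalize(plane[:, :3], dim=1).unsqueeze(1)  # unit normal, (B, 1, 3)
        d = plane[:, 3:].unsqueeze(1)                      # plane offset, (B, 1, 1)
        signed = (pts * n).sum(-1, keepdim=True) + d       # signed distance to the plane
        mirrored = pts - 2.0 * signed * n                  # reflect each point
        return torch.cat([pts, mirrored], dim=1)           # (B, 2N, 3)
```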
(TVCG, accepted 2023-10-18) Changfeng Ma, Yang Yang, Jie Guo, Mingqiang Wei, Chongjun Wang, Yanwen Guo, Wenping Wang. Collaborative Completion and Segmentation for Partial Point Clouds with Outliers. [paper] [code (coming soon; email me if you need it)]
ABSTRACT: Outliers inevitably creep into point clouds captured by 3D scanning, heavily degrading cutting-edge models on various geometric tasks. This paper looks at an intriguing question: whether point cloud completion and segmentation can promote each other to defeat outliers. To answer it, we propose a collaborative completion and segmentation network, termed CS-Net, for partial point clouds with outliers. Unlike most existing methods, CS-Net needs neither a clean (i.e., outlier-free) point cloud as input nor any outlier removal operation. CS-Net is a new learning paradigm that makes the completion and segmentation networks work collaboratively. With a cascaded architecture, our method refines the prediction progressively. Specifically, after the segmentation network, a cleaner point cloud is fed into the completion network. We design a novel completion network that harnesses the labels obtained by segmentation, together with farthest point sampling, to purify the point cloud, and leverages KNN grouping for better generation. Benefiting from segmentation, the completion module can utilize the filtered, cleaner point cloud for completion. Meanwhile, the segmentation module can distinguish outliers from target objects more accurately with the help of the clean and complete shape inferred by completion. Besides the collaborative mechanism of CS-Net, we establish a benchmark dataset of partial point clouds with outliers. Extensive experiments show clear improvements of CS-Net over its competitors in terms of outlier robustness and completion accuracy.
TL;DR: We propose a collaborative completion and segmentation network for point clouds with outliers to predict more accurate completion results.
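A simplified sketch of the collaborative cascade (segment, purify with farthest point sampling, complete, then re-segment with the completed shape as context); seg_net and comp_net are hypothetical stand-ins, not the CS-Net sub-networks.

```python
# Simplified collaboration sketch; seg_net and comp_net are assumed
# callables, not the actual CS-Net segmentation/completion networks.
import torch

def farthest_point_sampling(pts, k):
    """Naive FPS: select k well-spread points from pts of shape (N, 3)."""
    idx = [0]
    dist = torch.full((pts.shape[0],), float("inf"), device=pts.device)
    for _ in range(k - 1):
        dist = torch.minimum(dist, torch.norm(pts - pts[idx[-1]], dim=1))
        idx.append(int(dist.argmax()))
    return pts[idx]

def cascade_step(seg_net, comp_net, noisy_partial, n_keep=512):
    """One collaboration round: segment -> purify -> complete -> re-segment."""
    obj_prob = seg_net(noisy_partial, context=None)     # (N,) object probability
    cleaned = noisy_partial[obj_prob > 0.5]             # drop likely outliers
    sampled = farthest_point_sampling(cleaned, min(n_keep, cleaned.shape[0]))
    completed = comp_net(sampled)                       # complete the cleaner cloud
    # The inferred complete shape gives the segmenter extra context to
    # separate outliers from the target object more reliably.
    refined_prob = seg_net(noisy_partial, context=completed)
    return completed, refined_prob
```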
Name: Changfeng Ma
E-mail: changfengma@smail.nju.edu.cn, njumcf@126.com
Github: murcherful
If you cannot access a PDF, please email me.