RobustPointNet++: Enhancing PointNet++ with Noise-Filtering and Learned Augmentation for Sparse LiDAR Data

Abstract
Robust 3D perception in real-world applications demands models that generalize to the sparse and noisy point clouds typical of LiDAR sensors. PointNet++ has achieved state-of-the-art performance by hierarchically capturing local geometric features, but its robustness under significant corruption remains underexplored. In this work, we integrate noise-filtering and learnable data-augmentation strategies directly into the multi-scale abstraction layers of PointNet++, creating four model variants (no augmentation, jitter only, dropout only, and combined jitter+dropout). We evaluate performance on the ModelNet40 classification benchmark under clean and synthetically corrupted conditions. Our combined-augmentation model maintains 85.68% accuracy on noisy test sets, surpassing the base network's 82.13%, while sacrificing less than 0.66% accuracy on clean data. These results demonstrate that in-network augmentation can effectively regularize hierarchical feature learning, improving robustness to point sparsity and sensor noise. Code is available at https://github.com/ayusefi/point-cloud-learning.git
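For readers unfamiliar with the jitter and point-dropout augmentations named above, the sketch below illustrates how such operations are typically applied to point-cloud batches. It is a minimal, illustrative example assuming PyTorch tensors of shape (B, N, 3); the function names, default parameters (sigma, clip, max_dropout), and the choice of collapsing dropped points onto the first point are assumptions for illustration, not the paper's exact implementation.

import torch

def jitter_points(points: torch.Tensor, sigma: float = 0.01, clip: float = 0.05) -> torch.Tensor:
    # Add clipped Gaussian noise to every coordinate of a (B, N, 3) point batch.
    noise = torch.clamp(sigma * torch.randn_like(points), -clip, clip)
    return points + noise

def random_point_dropout(points: torch.Tensor, max_dropout: float = 0.875) -> torch.Tensor:
    # Randomly "drop" a per-cloud fraction of points by collapsing them onto the
    # first point of each cloud, simulating sparsity while keeping shapes fixed.
    B, N, _ = points.shape
    out = points.clone()
    for b in range(B):
        ratio = torch.rand(1).item() * max_dropout
        mask = torch.rand(N, device=points.device) < ratio
        out[b, mask] = out[b, 0]
    return out

def augment(points: torch.Tensor) -> torch.Tensor:
    # Combined jitter + dropout variant, applied on the fly during training.
    return jitter_points(random_point_dropout(points))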

Reference:
Abdullah Yusefi, İbrahim Toy, and Akif Durdu, "RobustPointNet++: Enhancing PointNet++ with Noise-Filtering and Learned Augmentation for Sparse LiDAR Data", 2025 14th International Symposium on Advanced Topics in Electrical Engineering (ATEE), October 9-11, 2025, Bucharest, Romania, pp. 1-6, doi:10.1109/ATEE66006.2025.11299989 (IEEE Xplore)

Link: https://doi.org/10.1109/ATEE66006.2025.11299989