Electronic data

  • TMM_final_file

    Accepted author manuscript, 4.74 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License

Links

Text available via DOI: https://doi.org/10.1109/TMM.2025.3557699

Few-shot 3D Point Cloud Segmentation via Relation Consistency-guided Heterogeneous Prototypes

Research output: Contribution to Journal/Magazine › Journal article › peer-review

E-pub ahead of print

Standard

Few-shot 3D Point Cloud Segmentation via Relation Consistency-guided Heterogeneous Prototypes. / Wei, L.; Lang, C.; Xu, Z. et al.
In: IEEE Transactions on Multimedia, 03.04.2025.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Harvard

Wei, L, Lang, C, Xu, Z, Liang, L & Liu, J 2025, 'Few-shot 3D Point Cloud Segmentation via Relation Consistency-guided Heterogeneous Prototypes', IEEE Transactions on Multimedia. https://doi.org/10.1109/TMM.2025.3557699

APA

Wei, L., Lang, C., Xu, Z., Liang, L., & Liu, J. (2025). Few-shot 3D Point Cloud Segmentation via Relation Consistency-guided Heterogeneous Prototypes. IEEE Transactions on Multimedia. Advance online publication. https://doi.org/10.1109/TMM.2025.3557699

Vancouver

Wei L, Lang C, Xu Z, Liang L, Liu J. Few-shot 3D Point Cloud Segmentation via Relation Consistency-guided Heterogeneous Prototypes. IEEE Transactions on Multimedia. 2025 Apr 3. Epub 2025 Apr 3. doi: 10.1109/TMM.2025.3557699

Author

Wei, L. ; Lang, C. ; Xu, Z. et al. / Few-shot 3D Point Cloud Segmentation via Relation Consistency-guided Heterogeneous Prototypes. In: IEEE Transactions on Multimedia. 2025.

Bibtex

@article{4bbef2feae344dcdbe67e0af3f6623f8,
title = "Few-shot 3D Point Cloud Segmentation via Relation Consistency-guided Heterogeneous Prototypes",
abstract = "Few-shot 3D point cloud semantic segmentation is a challenging task due to the lack of labeled point clouds (support set). To segment unlabeled query point clouds, existing prototype-based methods learn 3D prototypes from point features of the support set and then measure their distances to the query points. However, such homogeneous 3D prototypes are often of low quality because they overlook the valuable heterogeneous information buried in the support set, such as semantic labels and projected 2D depth maps. To address this issue, in this paper, we propose a novel Relation Consistency-guided Heterogeneous Prototype learning framework (RCHP), which improves prototype quality by integrating heterogeneous information using large multi-modal models (e.g. CLIP). RCHP achieves this through two core components: Heterogeneous Prototype Generation module which collaborates with 3D networks and CLIP to generate heterogeneous prototypes, and Heterogeneous Prototype Fusion module which effectively fuses heterogeneous prototypes to obtain high-quality prototypes. Furthermore, to bridge the gap between heterogeneous prototypes, we introduce a Heterogeneous Relation Consistency loss, which transfers more reliable inter-class relations (i.e., inter-prototype relations) from refined prototypes to heterogeneous ones. Extensive experiments conducted on five point cloud segmentation datasets, including four indoor datasets (S3DIS, ScanNet, SceneNN, NYU Depth V2) and one outdoor dataset (Semantic3D), demonstrate the superiority and generalization capability of our method, outperforming state-of-the-art approaches across all datasets. The code will be released as soon as the paper is accepted.",
author = "L. Wei and C. Lang and Z. Xu and L. Liang and J. Liu",
year = "2025",
month = apr,
day = "3",
doi = "10.1109/TMM.2025.3557699",
language = "English",
journal = "IEEE Transactions on Multimedia",
issn = "1520-9210",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}
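
As context for the abstract above, here is a minimal, generic sketch of the prototype-based pipeline it describes: class prototypes are averaged from labeled support-point features, and each query point is assigned to its nearest prototype. This illustrates the baseline approach the paper improves on, not the RCHP method itself; the feature dimensions, function names, and cosine-similarity matching below are assumptions.

import torch
import torch.nn.functional as F

def masked_average_prototypes(support_feats, support_labels, num_classes):
    # support_feats: (N, D) per-point features from some 3D backbone.
    # support_labels: (N,) integer class ids for the labeled support set.
    protos = []
    for c in range(num_classes):
        mask = support_labels == c
        if mask.any():
            protos.append(support_feats[mask].mean(dim=0))
        else:
            # Class absent from the support set: fall back to a zero vector.
            protos.append(torch.zeros(support_feats.shape[1]))
    return torch.stack(protos)  # (C, D): one prototype per class

def segment_query(query_feats, prototypes):
    # Assign each query point to the prototype with the highest cosine similarity.
    q = F.normalize(query_feats, dim=-1)   # (M, D)
    p = F.normalize(prototypes, dim=-1)    # (C, D)
    return (q @ p.t()).argmax(dim=-1)      # (M,) predicted class per point

# Toy 2-way episode with random features standing in for backbone outputs.
support_feats = torch.randn(100, 64)
support_labels = torch.randint(0, 2, (100,))
query_feats = torch.randn(50, 64)
prototypes = masked_average_prototypes(support_feats, support_labels, num_classes=2)
predictions = segment_query(query_feats, prototypes)

The abstract's criticism is that such prototypes are homogeneous (built from 3D point features alone); RCHP augments this step with CLIP-derived heterogeneous prototypes and a fusion module, the details of which are in the paper.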

RIS

TY - JOUR

T1 - Few-shot 3D Point Cloud Segmentation via Relation Consistency-guided Heterogeneous Prototypes

AU - Wei, L.

AU - Lang, C.

AU - Xu, Z.

AU - Liang, L.

AU - Liu, J.

PY - 2025/4/3

Y1 - 2025/4/3

N2 - Few-shot 3D point cloud semantic segmentation is a challenging task due to the lack of labeled point clouds (support set). To segment unlabeled query point clouds, existing prototype-based methods learn 3D prototypes from point features of the support set and then measure their distances to the query points. However, such homogeneous 3D prototypes are often of low quality because they overlook the valuable heterogeneous information buried in the support set, such as semantic labels and projected 2D depth maps. To address this issue, in this paper, we propose a novel Relation Consistency-guided Heterogeneous Prototype learning framework (RCHP), which improves prototype quality by integrating heterogeneous information using large multi-modal models (e.g. CLIP). RCHP achieves this through two core components: Heterogeneous Prototype Generation module which collaborates with 3D networks and CLIP to generate heterogeneous prototypes, and Heterogeneous Prototype Fusion module which effectively fuses heterogeneous prototypes to obtain high-quality prototypes. Furthermore, to bridge the gap between heterogeneous prototypes, we introduce a Heterogeneous Relation Consistency loss, which transfers more reliable inter-class relations (i.e., inter-prototype relations) from refined prototypes to heterogeneous ones. Extensive experiments conducted on five point cloud segmentation datasets, including four indoor datasets (S3DIS, ScanNet, SceneNN, NYU Depth V2) and one outdoor dataset (Semantic3D), demonstrate the superiority and generalization capability of our method, outperforming state-of-the-art approaches across all datasets. The code will be released as soon as the paper is accepted.

AB - Few-shot 3D point cloud semantic segmentation is a challenging task due to the lack of labeled point clouds (support set). To segment unlabeled query point clouds, existing prototype-based methods learn 3D prototypes from point features of the support set and then measure their distances to the query points. However, such homogeneous 3D prototypes are often of low quality because they overlook the valuable heterogeneous information buried in the support set, such as semantic labels and projected 2D depth maps. To address this issue, in this paper, we propose a novel Relation Consistency-guided Heterogeneous Prototype learning framework (RCHP), which improves prototype quality by integrating heterogeneous information using large multi-modal models (e.g. CLIP). RCHP achieves this through two core components: Heterogeneous Prototype Generation module which collaborates with 3D networks and CLIP to generate heterogeneous prototypes, and Heterogeneous Prototype Fusion module which effectively fuses heterogeneous prototypes to obtain high-quality prototypes. Furthermore, to bridge the gap between heterogeneous prototypes, we introduce a Heterogeneous Relation Consistency loss, which transfers more reliable inter-class relations (i.e., inter-prototype relations) from refined prototypes to heterogeneous ones. Extensive experiments conducted on five point cloud segmentation datasets, including four indoor datasets (S3DIS, ScanNet, SceneNN, NYU Depth V2) and one outdoor dataset (Semantic3D), demonstrate the superiority and generalization capability of our method, outperforming state-of-the-art approaches across all datasets. The code will be released as soon as the paper is accepted.

U2 - 10.1109/TMM.2025.3557699

DO - 10.1109/TMM.2025.3557699

M3 - Journal article

JO - IEEE Transactions on Multimedia

JF - IEEE Transactions on Multimedia

SN - 1520-9210

ER -
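
Finally, the Heterogeneous Relation Consistency loss named in the abstract transfers inter-class (i.e., inter-prototype) relations from refined prototypes to heterogeneous ones. The sketch below shows one common way such a relation-matching objective can be written, as a KL divergence between row-softmaxed prototype similarity matrices; the paper's exact formulation may differ, and the temperature value and names here are assumptions.

import torch
import torch.nn.functional as F

def relation_matrix(prototypes, temperature=0.1):
    # Row-softmaxed cosine-similarity matrix over prototypes, shape (C, C).
    p = F.normalize(prototypes, dim=-1)
    return F.softmax(p @ p.t() / temperature, dim=-1)

def relation_consistency_loss(hetero_protos, refined_protos):
    # Match the heterogeneous prototypes' relation structure to the
    # (detached) relations of the refined prototypes, which the abstract
    # describes as the more reliable source.
    target = relation_matrix(refined_protos).detach()
    student = relation_matrix(hetero_protos)
    return F.kl_div(student.log(), target, reduction="batchmean")

# Toy usage: two prototype sets over 5 classes with 64-d features.
loss = relation_consistency_loss(torch.randn(5, 64), torch.randn(5, 64))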