
Trusted Semi-Supervised Multi-View Classification With Contrastive Learning

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published
  • Xiaoli Wang
  • Yongli Wang
  • Yupeng Wang
  • Anqi Huang
  • Jun Liu
Journal publication date: 31/12/2024
Journal: IEEE Transactions on Multimedia
Volume: 26
Number of pages: 11
Pages (from-to): 8268-8278
Publication status: Published
Early online date: 19/03/24
Original language: English

Abstract

Semi-supervised multi-view learning is a remarkable but challenging task. Existing semi-supervised multi-view classification (SMVC) approaches mainly focus on performance improvement while ignoring decision reliability, which limits their deployment in safety-critical applications. Although several trusted multi-view classification methods have been proposed recently, they rely on manual annotations. Therefore, this work emphasizes trusted multi-view classification learning under semi-supervised conditions. Different from existing SMVC methods, this work jointly models class probabilities and uncertainties based on evidential deep learning to formulate view-specific opinions. Moreover, unlike previous works that explore cross-view consistency in a single schema, this work proposes a multi-level consistency constraint. Specifically, we explore instance-level consistency on the view-specific representation space and category-level consistency on opinions from multiple views. Our proposed trusted graph-based contrastive loss naturally establishes the relationship between joint opinions and view-specific representations, which encourages the view-specific representations to form a well-structured manifold and thereby improves classification performance. Overall, the proposed approach provides reliable and superior semi-supervised multi-view classification decisions. Extensive experiments demonstrate the effectiveness, reliability and robustness of the proposed model.
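
The sketch below illustrates the general idea behind the evidential step the abstract refers to: mapping a view-specific representation to non-negative class evidence and converting it into a subjective-logic opinion (per-class belief masses plus an explicit uncertainty mass) via a Dirichlet parameterisation. The layer sizes, the softplus evidence head, and the helper names are illustrative assumptions for exposition, not the paper's exact architecture or loss.

```python
# Minimal sketch (assumed design, not the paper's implementation):
# evidence -> Dirichlet parameters -> belief masses + uncertainty.

import torch
import torch.nn as nn
import torch.nn.functional as F


class EvidentialHead(nn.Module):
    """Maps a view-specific representation to non-negative class evidence."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Softplus keeps the evidence non-negative.
        return F.softplus(self.fc(x))


def opinion_from_evidence(evidence: torch.Tensor):
    """Convert evidence to a subjective-logic opinion.

    alpha = evidence + 1 parameterises a Dirichlet; with S = sum_k alpha_k,
    the belief masses are b_k = evidence_k / S and the uncertainty is u = K / S,
    so the belief masses and uncertainty sum to one per sample.
    """
    num_classes = evidence.shape[-1]
    alpha = evidence + 1.0
    strength = alpha.sum(dim=-1, keepdim=True)   # Dirichlet strength S
    belief = evidence / strength                  # b_k
    uncertainty = num_classes / strength          # u
    return belief, uncertainty


if __name__ == "__main__":
    torch.manual_seed(0)
    feats = torch.randn(4, 128)                   # a batch of view-specific features
    head = EvidentialHead(feat_dim=128, num_classes=10)
    belief, uncertainty = opinion_from_evidence(head(feats))
    # Sanity check: belief masses plus uncertainty equal one for each sample.
    print((belief.sum(dim=-1, keepdim=True) + uncertainty).squeeze())
```

In this view, low total evidence yields a small Dirichlet strength and hence a large uncertainty mass, which is what allows per-view opinions to be weighted and fused in a reliability-aware way before classification.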