MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition. / Zhao, Qihao; Jiang, Chen; Hu, Wei et al.
2023 IEEE/CVF International Conference on Computer Vision (ICCV). Institute of Electrical and Electronics Engineers Inc., 2024. p. 11563-11574 (Proceedings of the IEEE International Conference on Computer Vision).

Harvard

Zhao, Q, Jiang, C, Hu, W, Zhang, F & Liu, J 2024, MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition. in 2023 IEEE/CVF International Conference on Computer Vision (ICCV). Proceedings of the IEEE International Conference on Computer Vision, Institute of Electrical and Electronics Engineers Inc., pp. 11563-11574, 2023 IEEE/CVF International Conference on Computer Vision, ICCV 2023, Paris, France, 2/10/23. https://doi.org/10.1109/ICCV51070.2023.01065

APA

Zhao, Q., Jiang, C., Hu, W., Zhang, F., & Liu, J. (2024). MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition. In 2023 IEEE/CVF International Conference on Computer Vision (ICCV) (pp. 11563-11574). (Proceedings of the IEEE International Conference on Computer Vision). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICCV51070.2023.01065

Vancouver

Zhao Q, Jiang C, Hu W, Zhang F, Liu J. MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition. In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). Institute of Electrical and Electronics Engineers Inc. 2024. p. 11563-11574. (Proceedings of the IEEE International Conference on Computer Vision). Epub 2023 Oct 1. doi: 10.1109/ICCV51070.2023.01065

Author

Zhao, Qihao; Jiang, Chen; Hu, Wei et al. / MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition. 2023 IEEE/CVF International Conference on Computer Vision (ICCV). Institute of Electrical and Electronics Engineers Inc., 2024. pp. 11563-11574 (Proceedings of the IEEE International Conference on Computer Vision).

Bibtex

@inproceedings{cd7ebb5c300c44718eb76dc50fbaf2fe,
title = "MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition",
abstract = "Recently, multi-expert methods have led to significant improvements in long-tail recognition (LTR). We summarize two aspects that need further enhancement to contribute to LTR boosting: (1) More diverse experts: (2) Lower model variance. However, the previous methods didn't handle them well. To this end, we propose More Diverse experts with Consistency Self-distillation (MDCS) to bridge the gap left by earlier methods. Our MDCS approach consists of two core components: Diversity Loss (DL) and Consistency Self-distillation (CS). In detail, DL promotes diversity among experts by controlling their focus on different categories. To reduce the model variance, we employ KL divergence to distill the richer knowledge of weakly augmented instances for the experts' self-distillation. In particular, we design Confident Instance Sampling (CIS) to select the correctly classified instances for CS to avoid biased/noisy knowledge. In the analysis and ablation study, we demonstrate that our method compared with previous work can effectively increase the diversity of experts, significantly reduce the variance of the model, and improve recognition accuracy. Moreover, the roles of our DL and CS are mutually reinforcing and coupled: the diversity of experts benefits from the CS, and the CS cannot achieve remarkable results without the DL. Experiments show our MDCS outperforms the state-of-the-art by 1% ~ 2% on five popular long-tailed benchmarks, including CIFAR10-LT, CIFAR100-LT, ImageNet-LT, Places-LT, and iNaturalist 2018. The code is available at https://github.com/fistyee/MDCS",
author = "Qihao Zhao and Chen Jiang and Wei Hu and Fan Zhang and Jun Liu",
year = "2024",
month = jan,
day = "15",
doi = "10.1109/ICCV51070.2023.01065",
language = "English",
isbn = "9798350307191",
series = "Proceedings of the IEEE International Conference on Computer Vision",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
pages = "11563--11574",
booktitle = "2023 IEEE/CVF International Conference on Computer Vision (ICCV)",
note = "2023 IEEE/CVF International Conference on Computer Vision, ICCV 2023 ; Conference date: 02-10-2023 Through 06-10-2023",

}

RIS

TY - GEN

T1 - MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition

T2 - 2023 IEEE/CVF International Conference on Computer Vision, ICCV 2023

AU - Zhao, Qihao

AU - Jiang, Chen

AU - Hu, Wei

AU - Zhang, Fan

AU - Liu, Jun

PY - 2024/1/15

Y1 - 2024/1/15

N2 - Recently, multi-expert methods have led to significant improvements in long-tailed recognition (LTR). We summarize two aspects that need further enhancement to boost LTR: (1) more diverse experts; (2) lower model variance. However, previous methods did not handle these well. To this end, we propose More Diverse experts with Consistency Self-distillation (MDCS) to bridge the gap left by earlier methods. Our MDCS approach consists of two core components: Diversity Loss (DL) and Consistency Self-distillation (CS). In detail, DL promotes diversity among experts by controlling their focus on different categories. To reduce the model variance, we employ KL divergence to distill the richer knowledge of weakly augmented instances for the experts' self-distillation. In particular, we design Confident Instance Sampling (CIS) to select correctly classified instances for CS, avoiding biased/noisy knowledge. In the analysis and ablation study, we demonstrate that, compared with previous work, our method effectively increases the diversity of experts, significantly reduces the variance of the model, and improves recognition accuracy. Moreover, the roles of our DL and CS are mutually reinforcing and coupled: the diversity of experts benefits from CS, and CS cannot achieve remarkable results without DL. Experiments show our MDCS outperforms the state of the art by 1%~2% on five popular long-tailed benchmarks: CIFAR10-LT, CIFAR100-LT, ImageNet-LT, Places-LT, and iNaturalist 2018. The code is available at https://github.com/fistyee/MDCS

AB - Recently, multi-expert methods have led to significant improvements in long-tailed recognition (LTR). We summarize two aspects that need further enhancement to boost LTR: (1) more diverse experts; (2) lower model variance. However, previous methods did not handle these well. To this end, we propose More Diverse experts with Consistency Self-distillation (MDCS) to bridge the gap left by earlier methods. Our MDCS approach consists of two core components: Diversity Loss (DL) and Consistency Self-distillation (CS). In detail, DL promotes diversity among experts by controlling their focus on different categories. To reduce the model variance, we employ KL divergence to distill the richer knowledge of weakly augmented instances for the experts' self-distillation. In particular, we design Confident Instance Sampling (CIS) to select correctly classified instances for CS, avoiding biased/noisy knowledge. In the analysis and ablation study, we demonstrate that, compared with previous work, our method effectively increases the diversity of experts, significantly reduces the variance of the model, and improves recognition accuracy. Moreover, the roles of our DL and CS are mutually reinforcing and coupled: the diversity of experts benefits from CS, and CS cannot achieve remarkable results without DL. Experiments show our MDCS outperforms the state of the art by 1%~2% on five popular long-tailed benchmarks: CIFAR10-LT, CIFAR100-LT, ImageNet-LT, Places-LT, and iNaturalist 2018. The code is available at https://github.com/fistyee/MDCS

U2 - 10.1109/ICCV51070.2023.01065

DO - 10.1109/ICCV51070.2023.01065

M3 - Conference contribution/Paper

AN - SCOPUS:85185866909

SN - 9798350307191

T3 - Proceedings of the IEEE International Conference on Computer Vision

SP - 11563

EP - 11574

BT - 2023 IEEE/CVF International Conference on Computer Vision (ICCV)

PB - Institute of Electrical and Electronics Engineers Inc.

Y2 - 2 October 2023 through 6 October 2023

ER -
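
Illustrative sketch

The abstract above describes Consistency Self-distillation (CS): KL divergence distills an expert's predictions on weakly augmented instances into its predictions on strongly augmented ones, with Confident Instance Sampling (CIS) keeping only correctly classified instances as teachers. Below is a minimal, hypothetical PyTorch sketch of that idea; the function name, temperature value, and masking details are illustrative assumptions, not the authors' implementation (see https://github.com/fistyee/MDCS for the official code).

import torch
import torch.nn.functional as F

def consistency_self_distillation(logits_weak, logits_strong, labels, temperature=2.0):
    # Hypothetical sketch of CS + CIS as described in the abstract.
    # logits_weak:   one expert's logits on weakly augmented views, shape (B, C)
    # logits_strong: the same expert's logits on strongly augmented views, (B, C)
    # labels:        ground-truth class indices, shape (B,)
    with torch.no_grad():
        # Confident Instance Sampling (CIS): keep only instances the expert
        # classifies correctly on the weak view, to avoid biased/noisy targets.
        confident = logits_weak.argmax(dim=1).eq(labels)
        # The weak view provides the softened teacher distribution.
        teacher = F.softmax(logits_weak / temperature, dim=1)
    if not confident.any():
        # No confident instances in this batch: contribute no distillation loss.
        return logits_strong.new_zeros(())
    # KL divergence distills the weak-view knowledge into the strong view.
    student_log_prob = F.log_softmax(logits_strong / temperature, dim=1)
    kl = F.kl_div(student_log_prob[confident], teacher[confident], reduction="batchmean")
    return (temperature ** 2) * kl  # standard temperature scaling for distillation

In a multi-expert model, such a loss would presumably be computed per expert and added to the usual classification objective; the weak view serves as the teacher because its predictions are more stable than those on heavily augmented inputs.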