

Uncertainty Modeling for Out-of-Distribution Generalization

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN, Conference contribution/Paper, peer-review

Published

Standard

Uncertainty Modeling for Out-of-Distribution Generalization. / Li, Xiaotong; Dai, Yongxing; Ge, Yixiao et al.
The Tenth International Conference on Learning Representations (Virtual). ICLR, 2022. p. 1-16.


Harvard

Li, X, Dai, Y, Ge, Y, Liu, J, Shan, Y & Duan, L-Y 2022, Uncertainty Modeling for Out-of-Distribution Generalization. in The Tenth International Conference on Learning Representations (Virtual). ICLR, pp. 1-16. <https://openreview.net/pdf?id=6HN7LHyzGgC>

APA

Li, X., Dai, Y., Ge, Y., Liu, J., Shan, Y., & Duan, L.-Y. (2022). Uncertainty Modeling for Out-of-Distribution Generalization. In The Tenth International Conference on Learning Representations (Virtual) (pp. 1-16). ICLR. https://openreview.net/pdf?id=6HN7LHyzGgC

Vancouver

Li X, Dai Y, Ge Y, Liu J, Shan Y, Duan LY. Uncertainty Modeling for Out-of-Distribution Generalization. In The Tenth International Conference on Learning Representations (Virtual). ICLR. 2022. p. 1-16. Epub 2022 Jan 28.

Author

Li, Xiaotong ; Dai, Yongxing ; Ge, Yixiao et al. / Uncertainty Modeling for Out-of-Distribution Generalization. The Tenth International Conference on Learning Representations (Virtual). ICLR, 2022. pp. 1-16

Bibtex

@inproceedings{9d664cb988b44fd69d0d4aea082d39c9,
title = "Uncertainty Modeling for Out-of-Distribution Generalization",
abstract = "Though remarkable progress has been achieved in various vision tasks, deep neural networks still suffer obvious performance degradation when tested in out-of-distribution scenarios. We argue that the feature statistics (mean and standard deviation), which carry the domain characteristics of the training data, can be properly manipulated to improve the generalization ability of deep learning models. Common methods often consider the feature statistics as deterministic values measured from the learned features and do not explicitly consider the uncertain statistics discrepancy caused by potential domain shifts during testing. In this paper, we improve the network generalization ability by modeling the uncertainty of domain shifts with synthesized feature statistics during training. Specifically, we hypothesize that the feature statistic, after considering the potential uncertainties, follows a multivariate Gaussian distribution. Hence, each feature statistic is no longer a deterministic value, but a probabilistic point with diverse distribution possibilities. With the uncertain feature statistics, the models can be trained to alleviate the domain perturbations and achieve better robustness against potential domain shifts. Our method can be readily integrated into networks without additional parameters. Extensive experiments demonstrate that our proposed method consistently improves the network generalization ability on multiple vision tasks, including image classification, semantic segmentation, and instance retrieval. The code is available at https://github.com/lixiaotong97/DSU.",
author = "Xiaotong Li and Yongxing Dai and Yixiao Ge and Jun Liu and Ying Shan and Ling-Yu Duan",
year = "2022",
month = apr,
day = "25",
language = "English",
pages = "1--16",
booktitle = "The Tenth International Conference on Learning Representations (Virtual)",
publisher = "ICLR",
url = "https://openreview.net/pdf?id=6HN7LHyzGgC",
}

RIS

TY - GEN

T1 - Uncertainty Modeling for Out-of-Distribution Generalization

AU - Li, Xiaotong

AU - Dai, Yongxing

AU - Ge, Yixiao

AU - Liu, Jun

AU - Shan, Ying

AU - Duan, Ling-Yu

PY - 2022/4/25

Y1 - 2022/4/25

N2 - Though remarkable progress has been achieved in various vision tasks, deep neural networks still suffer obvious performance degradation when tested in out-of-distribution scenarios. We argue that the feature statistics (mean and standard deviation), which carry the domain characteristics of the training data, can be properly manipulated to improve the generalization ability of deep learning models. Common methods often consider the feature statistics as deterministic values measured from the learned features and do not explicitly consider the uncertain statistics discrepancy caused by potential domain shifts during testing. In this paper, we improve the network generalization ability by modeling the uncertainty of domain shifts with synthesized feature statistics during training. Specifically, we hypothesize that the feature statistic, after considering the potential uncertainties, follows a multivariate Gaussian distribution. Hence, each feature statistic is no longer a deterministic value, but a probabilistic point with diverse distribution possibilities. With the uncertain feature statistics, the models can be trained to alleviate the domain perturbations and achieve better robustness against potential domain shifts. Our method can be readily integrated into networks without additional parameters. Extensive experiments demonstrate that our proposed method consistently improves the network generalization ability on multiple vision tasks, including image classification, semantic segmentation, and instance retrieval. The code is available at https://github.com/lixiaotong97/DSU.

AB - Though remarkable progress has been achieved in various vision tasks, deep neural networks still suffer obvious performance degradation when tested in out-of-distribution scenarios. We argue that the feature statistics (mean and standard deviation), which carry the domain characteristics of the training data, can be properly manipulated to improve the generalization ability of deep learning models. Common methods often consider the feature statistics as deterministic values measured from the learned features and do not explicitly consider the uncertain statistics discrepancy caused by potential domain shifts during testing. In this paper, we improve the network generalization ability by modeling the uncertainty of domain shifts with synthesized feature statistics during training. Specifically, we hypothesize that the feature statistic, after considering the potential uncertainties, follows a multivariate Gaussian distribution. Hence, each feature statistic is no longer a deterministic value, but a probabilistic point with diverse distribution possibilities. With the uncertain feature statistics, the models can be trained to alleviate the domain perturbations and achieve better robustness against potential domain shifts. Our method can be readily integrated into networks without additional parameters. Extensive experiments demonstrate that our proposed method consistently improves the network generalization ability on multiple vision tasks, including image classification, semantic segmentation, and instance retrieval. The code is available at https://github.com/lixiaotong97/DSU.

M3 - Conference contribution/Paper

SP - 1

EP - 16

BT - The Tenth International Conference on Learning Representations (Virtual)

PB - ICLR

UR - https://openreview.net/pdf?id=6HN7LHyzGgC

ER -
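
The abstract describes treating per-channel feature statistics as Gaussian random variables and resampling them during training. The following is a minimal NumPy sketch of that idea, not the authors' released code (see the linked GitHub repository for that); the function name, shapes, and the batch-wise estimate of the statistics' uncertainty are assumptions made for illustration.

```python
import numpy as np

def dsu_perturb(x, p=0.5, eps=1e-6, rng=None):
    """Perturb feature statistics of a batch x of shape (B, C, H, W).

    Sketch of the abstract's idea: per-channel mean/std are treated as
    Gaussian variables whose spread is estimated across the batch, then
    resampled and re-applied to the normalized features.
    """
    rng = rng or np.random.default_rng(0)
    if rng.random() > p:  # applied stochastically during training
        return x
    # Per-sample, per-channel statistics over the spatial dimensions.
    mu = x.mean(axis=(2, 3), keepdims=True)        # (B, C, 1, 1)
    sig = x.std(axis=(2, 3), keepdims=True) + eps  # (B, C, 1, 1)
    # Uncertainty of the statistics: their variation across the batch.
    sig_mu = mu.std(axis=0, keepdims=True)         # (1, C, 1, 1)
    sig_sig = sig.std(axis=0, keepdims=True)       # (1, C, 1, 1)
    # Resample new statistics from the estimated Gaussians.
    beta = mu + rng.standard_normal(mu.shape) * sig_mu
    gamma = sig + rng.standard_normal(sig.shape) * sig_sig
    # Normalize with the measured statistics, re-style with the sampled ones.
    return gamma * (x - mu) / sig + beta
```

Because the operation only normalizes and re-scales features, it adds no learnable parameters, which matches the abstract's claim that the method integrates into networks without additional parameters.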