Research output: Contribution to Journal/Magazine › Journal article › peer-review
TY - JOUR
T1 - Towards More Reliable Confidence Estimation
AU - Qu, Haoxuan
AU - Foo, Lin Geng
AU - Li, Yanchao
AU - Liu, Jun
PY - 2023/11/1
Y1 - 2023/11/1
N2 - Confidence estimation, which aims to assess the trustworthiness of a model's prediction output during deployment, has recently received much research attention due to its importance for the safe deployment of deep models. Previous works have outlined two important characteristics that a reliable confidence estimation model should possess: the ability to perform well under label imbalance and the ability to handle various out-of-distribution inputs. In this work, we propose a meta-learning framework that simultaneously improves both characteristics in a confidence estimation model. Specifically, we first construct virtual training and testing sets with intentionally designed distribution differences between them. Our framework then uses these constructed sets to train the confidence estimation model through a virtual training and testing scheme, leading it to learn knowledge that generalizes to diverse distributions. Moreover, we incorporate into our framework a modified meta optimization rule, which converges the confidence estimator to flat meta minima. We show the effectiveness of our framework through extensive experiments on various tasks, including monocular depth estimation, image classification, and semantic segmentation.
AB - Confidence estimation, which aims to assess the trustworthiness of a model's prediction output during deployment, has recently received much research attention due to its importance for the safe deployment of deep models. Previous works have outlined two important characteristics that a reliable confidence estimation model should possess: the ability to perform well under label imbalance and the ability to handle various out-of-distribution inputs. In this work, we propose a meta-learning framework that simultaneously improves both characteristics in a confidence estimation model. Specifically, we first construct virtual training and testing sets with intentionally designed distribution differences between them. Our framework then uses these constructed sets to train the confidence estimation model through a virtual training and testing scheme, leading it to learn knowledge that generalizes to diverse distributions. Moreover, we incorporate into our framework a modified meta optimization rule, which converges the confidence estimator to flat meta minima. We show the effectiveness of our framework through extensive experiments on various tasks, including monocular depth estimation, image classification, and semantic segmentation.
KW - Confidence estimation
KW - distribution shift robustness
KW - meta-learning
U2 - 10.1109/TPAMI.2023.3291676
DO - 10.1109/TPAMI.2023.3291676
M3 - Journal article
C2 - 37399165
AN - SCOPUS:85164381315
VL - 45
SP - 13152
EP - 13169
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
SN - 0162-8828
IS - 11
ER -