AdaCare: Explainable Clinical Health Status Representation Learning via Scale Adaptive Feature Extraction and Recalibration

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

AdaCare: Explainable Clinical Health Status Representation Learning via Scale Adaptive Feature Extraction and Recalibration. / Ma, Liantao; Gao, Junyi; Wang, Yasha et al.
The Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020. AAAI, 2020. p. 825-832 (Proceedings of the AAAI Conference on Artificial Intelligence; Vol. 34, No. 1).


Harvard

Ma, L, Gao, J, Wang, Y, Wang, J, Ruan, W, Zhang, C, Tang, W, Gao, X & Ma, X 2020, AdaCare: Explainable Clinical Health Status Representation Learning via Scale Adaptive Feature Extraction and Recalibration. in The Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020. Proceedings of the AAAI Conference on Artificial Intelligence, no. 1, vol. 34, AAAI, pp. 825-832. https://doi.org/10.1609/aaai.v34i01.5427

APA

Ma, L., Gao, J., Wang, Y., Wang, J., Ruan, W., Zhang, C., Tang, W., Gao, X., & Ma, X. (2020). AdaCare: Explainable Clinical Health Status Representation Learning via Scale Adaptive Feature Extraction and Recalibration. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020 (pp. 825-832). (Proceedings of the AAAI Conference on Artificial Intelligence; Vol. 34, No. 1). AAAI. https://doi.org/10.1609/aaai.v34i01.5427

Vancouver

Ma L, Gao J, Wang Y, Wang J, Ruan W, Zhang C et al. AdaCare: Explainable Clinical Health Status Representation Learning via Scale Adaptive Feature Extraction and Recalibration. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020. AAAI. 2020. p. 825-832. (Proceedings of the AAAI Conference on Artificial Intelligence; 1). doi: 10.1609/aaai.v34i01.5427

Author

Ma, Liantao ; Gao, Junyi ; Wang, Yasha et al. / AdaCare : Explainable Clinical Health Status Representation Learning via Scale Adaptive Feature Extraction and Recalibration. The Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020. AAAI, 2020. pp. 825-832 (Proceedings of the AAAI Conference on Artificial Intelligence; 1).

Bibtex

@inproceedings{1c30961316ce4cf49ab454a6a7a669d7,
title = "AdaCare: Explainable Clinical Health Status Representation Learning via Scale Adaptive Feature Extraction and Recalibration",
abstract = "Deep learning-based health status representation learning and clinical prediction have raised much research interest in recent years. Existing models have shown superior performance, but there are still several major issues that have not been fully taken into consideration. First, the historical variation pattern of the biomarker in diverse time scales plays an important role in indicating the health status, but it has not been explicitly extracted by existing works. Second, key factors that strongly indicate the health risk are different among patients. It is still challenging to adaptively make use of the features for patients in diverse conditions. Third, using the prediction model as a black box will limit the reliability in clinical practice. However, none of the existing works can provide satisfying interpretability and meanwhile achieve high prediction performance. In this work, we develop a general health status representation learning model, named AdaCare. It can capture the long and short-term variations of biomarkers as clinical features to depict the health status in multiple time scales. It also models the correlation between clinical features to enhance the ones which strongly indicate the health status and thus can maintain a state-of-the-art performance in terms of prediction accuracy while providing qualitative in- interpretability. We conduct health risk prediction experiment on two real-world datasets. Experiment results indicate that AdaCare outperforms state-of-the-art approaches and provides effective interpretability which is verifiable by clinical experts.",
author = "Liantao Ma and Junyi Gao and Yasha Wang and Jiangtao Wang and Wenjie Ruan and Chaohe Zhang and Wen Tang and Xin Gao and Xinyu Ma",
year = "2020",
month = apr,
day = "3",
doi = "10.1609/aaai.v34i01.5427",
language = "English",
isbn = " 9781577358350",
series = "Proceedings of the AAAI Conference on Artificial Intelligence",
publisher = "AAAI",
number = "1",
pages = "825--832",
booktitle = "The Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020",

}

RIS

TY - GEN

T1 - AdaCare

T2 - Explainable Clinical Health Status Representation Learning via Scale Adaptive Feature Extraction and Recalibration

AU - Ma, Liantao

AU - Gao, Junyi

AU - Wang, Yasha

AU - Wang, Jiangtao

AU - Ruan, Wenjie

AU - Zhang, Chaohe

AU - Tang, Wen

AU - Gao, Xin

AU - Ma, Xinyu

PY - 2020/4/3

Y1 - 2020/4/3

N2 - Deep learning-based health status representation learning and clinical prediction have raised much research interest in recent years. Existing models have shown superior performance, but there are still several major issues that have not been fully taken into consideration. First, the historical variation pattern of the biomarker in diverse time scales plays an important role in indicating the health status, but it has not been explicitly extracted by existing works. Second, key factors that strongly indicate the health risk are different among patients. It is still challenging to adaptively make use of the features for patients in diverse conditions. Third, using the prediction model as a black box will limit the reliability in clinical practice. However, none of the existing works can provide satisfying interpretability and meanwhile achieve high prediction performance. In this work, we develop a general health status representation learning model, named AdaCare. It can capture the long- and short-term variations of biomarkers as clinical features to depict the health status in multiple time scales. It also models the correlation between clinical features to enhance the ones which strongly indicate the health status and thus can maintain a state-of-the-art performance in terms of prediction accuracy while providing qualitative interpretability. We conduct health risk prediction experiments on two real-world datasets. Experimental results indicate that AdaCare outperforms state-of-the-art approaches and provides effective interpretability which is verifiable by clinical experts.

AB - Deep learning-based health status representation learning and clinical prediction have raised much research interest in recent years. Existing models have shown superior performance, but there are still several major issues that have not been fully taken into consideration. First, the historical variation pattern of the biomarker in diverse time scales plays an important role in indicating the health status, but it has not been explicitly extracted by existing works. Second, key factors that strongly indicate the health risk are different among patients. It is still challenging to adaptively make use of the features for patients in diverse conditions. Third, using the prediction model as a black box will limit the reliability in clinical practice. However, none of the existing works can provide satisfying interpretability and meanwhile achieve high prediction performance. In this work, we develop a general health status representation learning model, named AdaCare. It can capture the long- and short-term variations of biomarkers as clinical features to depict the health status in multiple time scales. It also models the correlation between clinical features to enhance the ones which strongly indicate the health status and thus can maintain a state-of-the-art performance in terms of prediction accuracy while providing qualitative interpretability. We conduct health risk prediction experiments on two real-world datasets. Experimental results indicate that AdaCare outperforms state-of-the-art approaches and provides effective interpretability which is verifiable by clinical experts.

U2 - 10.1609/aaai.v34i01.5427

DO - 10.1609/aaai.v34i01.5427

M3 - Conference contribution/Paper

SN - 9781577358350

T3 - Proceedings of the AAAI Conference on Artificial Intelligence

SP - 825

EP - 832

BT - The Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020

PB - AAAI

ER -
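
Illustrative code sketch

The abstract describes two architectural ideas: extracting biomarker variation at multiple time scales and recalibrating clinical features so that the ones which strongly indicate the health status are enhanced. The following is a minimal, hypothetical PyTorch sketch of that pattern: multi-scale causal dilated convolutions over a biomarker sequence, a squeeze-and-excitation style recalibration gate, and a GRU risk head. The module names, layer choices, and hyperparameters are assumptions made for illustration only and are not taken from the authors' released implementation.

# A hypothetical sketch of the multi-scale extraction + feature recalibration
# pattern described in the abstract. Module names, layer choices, and
# hyperparameters are illustrative assumptions, not the authors' released code.
import torch
import torch.nn as nn


class FeatureRecalibration(nn.Module):
    """Squeeze-and-excitation style gate: learns per-channel weights that
    enhance the features which strongly indicate the health status."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (batch, time, channels). The per-channel weights also serve as a
        # feature-level attribution signal.
        weights = self.gate(x.mean(dim=1))            # (batch, channels)
        return x * weights.unsqueeze(1), weights


class MultiScaleHealthModel(nn.Module):
    """Causal dilated convolutions at several dilation rates capture short-
    and long-term biomarker variation; a GRU summarizes the visit sequence."""

    def __init__(self, n_features: int, hidden: int = 64, dilations=(1, 2, 4)):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(n_features, hidden, kernel_size=2, dilation=d)
             for d in dilations]
        )
        self.recalibrate = FeatureRecalibration(hidden * len(dilations))
        self.rnn = nn.GRU(hidden * len(dilations), hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, time, n_features), e.g. biomarkers recorded per visit.
        x = x.transpose(1, 2)                         # (batch, features, time)
        outs = []
        for conv in self.convs:
            # Left-pad so each dilated convolution stays causal and keeps
            # the original sequence length.
            pad = (conv.kernel_size[0] - 1) * conv.dilation[0]
            outs.append(conv(nn.functional.pad(x, (pad, 0))))
        h = torch.cat(outs, dim=1).transpose(1, 2)    # (batch, time, channels)
        h, feat_weights = self.recalibrate(h)
        _, last = self.rnn(h)
        risk = torch.sigmoid(self.head(last.squeeze(0)))
        return risk, feat_weights


# Usage sketch: 8 patients, 48 visits, 17 biomarker features.
model = MultiScaleHealthModel(n_features=17)
risk, weights = model(torch.randn(8, 48, 17))
print(risk.shape, weights.shape)                      # (8, 1) and (8, 192)

In this sketch the recalibration weights act as a per-feature importance signal; a gate of this kind is one way such a model can expose feature-level interpretability, though the actual AdaCare design should be taken from the paper itself.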