
Meta Agent Teaming Active Learning for Pose Estimation

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Meta Agent Teaming Active Learning for Pose Estimation. / Gong, Jia; Fan, Zhipeng; Ke, Qiuhong et al.
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. p. 11069-11079 (2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)).

Harvard

Gong, J, Fan, Z, Ke, Q, Rahmani, H & Liu, J 2022, Meta Agent Teaming Active Learning for Pose Estimation. in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, pp. 11069-11079. https://doi.org/10.1109/CVPR52688.2022.01080

APA

Gong, J., Fan, Z., Ke, Q., Rahmani, H., & Liu, J. (2022). Meta Agent Teaming Active Learning for Pose Estimation. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 11069-11079). (2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)). IEEE. https://doi.org/10.1109/CVPR52688.2022.01080

Vancouver

Gong J, Fan Z, Ke Q, Rahmani H, Liu J. Meta Agent Teaming Active Learning for Pose Estimation. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. 2022. p. 11069-11079. (2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)). Epub 2022 Jun 24. doi: 10.1109/CVPR52688.2022.01080

Author

Gong, Jia ; Fan, Zhipeng ; Ke, Qiuhong et al. / Meta Agent Teaming Active Learning for Pose Estimation. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. pp. 11069-11079 (2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)).

Bibtex

@inproceedings{9c0a8ec16cd4453787bb3a9b20221848,
title = "Meta Agent Teaming Active Learning for Pose Estimation",
abstract = "The existing pose estimation approaches often require a large number of annotated images to attain good estimation performance, which are laborious to acquire. To reduce the human efforts on pose annotations, we propose a novel Meta Agent Teaming Active Learning (MATAL) framework to actively select and label informative images for effective learning. Our MATAL formulates the image selection procedure as a Markov Decision Process and learns an optimal sampling policy that directly maximizes the performance of the pose estimator based on the reward. Our framework consists of a novel state-action representation as well as a multi-agent team to enable batch sampling in the active learning procedure. The framework could be effectively optimized via Meta-Optimization to accelerate the adaptation to the gradually expanded labeled data during deployment. Finally, we show experimental results on both human hand and body pose estimation benchmark datasets and demonstrate that our method significantly outperforms all baselines continuously under the same amount of annotation budget. Moreover, to obtain similar pose estimation accuracy, our MATAL framework can save around 40% labeling efforts on average compared to state-of-the-art active learning frameworks.",
author = "Jia Gong and Zhipeng Fan and Qiuhong Ke and Hossein Rahmani and Jun Liu",
year = "2022",
month = sep,
day = "27",
doi = "10.1109/CVPR52688.2022.01080",
language = "English",
isbn = "9781665469470",
series = "2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)",
publisher = "IEEE",
pages = "11069--11079",
booktitle = "2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)",
}
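The abstract describes image selection as a Markov Decision Process in which a team of agents scores unlabeled images and a batch is selected per round. A minimal sketch of that kind of batch selection step is below; this is not the authors' implementation, and the linear scoring head and all function names are hypothetical stand-ins for the learned state-action representation.

```python
import numpy as np

def agent_score(state, weights):
    """Hypothetical scoring head: map an image's state features to a
    selection score (MATAL learns this from reward; here it is linear)."""
    return float(state @ weights)

def select_batch(states, weights, batch_size):
    """One selection round: score every unlabeled image and take the
    top-k, approximating a multi-agent team with a shared policy."""
    scores = np.array([agent_score(s, weights) for s in states])
    return np.argsort(scores)[::-1][:batch_size]
```

In the paper's framing, the selected batch is sent for annotation, the pose estimator is retrained, and the change in estimator performance supplies the reward that trains the policy.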

RIS

TY - GEN

T1 - Meta Agent Teaming Active Learning for Pose Estimation

AU - Gong, Jia

AU - Fan, Zhipeng

AU - Ke, Qiuhong

AU - Rahmani, Hossein

AU - Liu, Jun

PY - 2022/9/27

Y1 - 2022/9/27

N2 - The existing pose estimation approaches often require a large number of annotated images to attain good estimation performance, which are laborious to acquire. To reduce the human efforts on pose annotations, we propose a novel Meta Agent Teaming Active Learning (MATAL) framework to actively select and label informative images for effective learning. Our MATAL formulates the image selection procedure as a Markov Decision Process and learns an optimal sampling policy that directly maximizes the performance of the pose estimator based on the reward. Our framework consists of a novel state-action representation as well as a multi-agent team to enable batch sampling in the active learning procedure. The framework could be effectively optimized via Meta-Optimization to accelerate the adaptation to the gradually expanded labeled data during deployment. Finally, we show experimental results on both human hand and body pose estimation benchmark datasets and demonstrate that our method significantly outperforms all baselines continuously under the same amount of annotation budget. Moreover, to obtain similar pose estimation accuracy, our MATAL framework can save around 40% labeling efforts on average compared to state-of-the-art active learning frameworks.

AB - The existing pose estimation approaches often require a large number of annotated images to attain good estimation performance, which are laborious to acquire. To reduce the human efforts on pose annotations, we propose a novel Meta Agent Teaming Active Learning (MATAL) framework to actively select and label informative images for effective learning. Our MATAL formulates the image selection procedure as a Markov Decision Process and learns an optimal sampling policy that directly maximizes the performance of the pose estimator based on the reward. Our framework consists of a novel state-action representation as well as a multi-agent team to enable batch sampling in the active learning procedure. The framework could be effectively optimized via Meta-Optimization to accelerate the adaptation to the gradually expanded labeled data during deployment. Finally, we show experimental results on both human hand and body pose estimation benchmark datasets and demonstrate that our method significantly outperforms all baselines continuously under the same amount of annotation budget. Moreover, to obtain similar pose estimation accuracy, our MATAL framework can save around 40% labeling efforts on average compared to state-of-the-art active learning frameworks.

U2 - 10.1109/CVPR52688.2022.01080

DO - 10.1109/CVPR52688.2022.01080

M3 - Conference contribution/Paper

SN - 9781665469470

T3 - 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

SP - 11069

EP - 11079

BT - 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

PB - IEEE

ER -
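The record's abstract also mentions Meta-Optimization to speed up adaptation as the labeled pool grows. As a loose illustration only, a Reptile-style inner/outer update (a standard meta-learning scheme, not necessarily the one MATAL uses; all names and learning rates here are hypothetical) can be sketched as:

```python
import numpy as np

def inner_adapt(weights, grads, lr=0.1, steps=3):
    """Inner loop: a few gradient steps on the newly labeled data."""
    w = weights.copy()
    for g in grads[:steps]:
        w = w - lr * g
    return w

def meta_update(weights, adapted_weights, meta_lr=0.5):
    """Reptile-style outer step: move the initialization toward the
    weights obtained after inner adaptation."""
    return weights + meta_lr * (adapted_weights - weights)
```

The idea is that the meta-learned initialization transfers across active-learning rounds, so each round's adaptation to the expanded labeled set needs fewer updates.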