
A Novel Training and Collaboration Integrated Framework for Human-Agent Teleoperation

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

A Novel Training and Collaboration Integrated Framework for Human-Agent Teleoperation. / Huang, Zebin; Wang, Ziwei; Bai, Weibang et al.
In: Sensors, Vol. 21, No. 24, 8341, 14.12.2021.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Harvard

Huang, Z, Wang, Z, Bai, W, Huang, Y, Sun, L, Xiao, B & Yeatman, EM 2021, 'A Novel Training and Collaboration Integrated Framework for Human-Agent Teleoperation', Sensors, vol. 21, no. 24, 8341. https://doi.org/10.3390/s21248341

APA

Huang, Z., Wang, Z., Bai, W., Huang, Y., Sun, L., Xiao, B., & Yeatman, E. M. (2021). A Novel Training and Collaboration Integrated Framework for Human-Agent Teleoperation. Sensors, 21(24), Article 8341. https://doi.org/10.3390/s21248341

Vancouver

Huang Z, Wang Z, Bai W, Huang Y, Sun L, Xiao B et al. A Novel Training and Collaboration Integrated Framework for Human-Agent Teleoperation. Sensors. 2021 Dec 14;21(24):8341. doi: 10.3390/s21248341

Author

Huang, Zebin ; Wang, Ziwei ; Bai, Weibang et al. / A Novel Training and Collaboration Integrated Framework for Human-Agent Teleoperation. In: Sensors. 2021 ; Vol. 21, No. 24.

Bibtex

@article{e4c6a74a1079419095b6503bfb68ff4b,
title = "A Novel Training and Collaboration Integrated Framework for Human-Agent Teleoperation",
abstract = "Human operators have the trend of increasing physical and mental workloads when performing teleoperation tasks in uncertain and dynamic environments. In addition, their performances are influenced by subjective factors, potentially leading to operational errors or task failure. Although agent-based methods offer a promising solution to the above problems, the human experience and intelligence are necessary for teleoperation scenarios. In this paper, a truncated quantile critics reinforcement learning-based integrated framework is proposed for human–agent teleoperation that encompasses training, assessment and agent-based arbitration. The proposed framework allows for an expert training agent, a bilateral training and cooperation process to realize the co-optimization of agent and human. It can provide efficient and quantifiable training feedback. Experiments have been conducted to train subjects with the developed algorithm. The performances of human–human and human–agent cooperation modes are also compared. The results have shown that subjects can complete the tasks of reaching and picking and placing with the assistance of an agent in a shorter operational time, with a higher success rate and less workload than human–human cooperation.",
author = "Zebin Huang and Ziwei Wang and Weibang Bai and Yanpei Huang and Lichao Sun and Bo Xiao and Yeatman, {Eric M.}",
year = "2021",
month = dec,
day = "14",
doi = "10.3390/s21248341",
language = "English",
volume = "21",
journal = "Sensors",
issn = "1424-8220",
publisher = "Multidisciplinary Digital Publishing Institute (MDPI)",
number = "24",

}

RIS

TY - JOUR

T1 - A Novel Training and Collaboration Integrated Framework for Human-Agent Teleoperation

AU - Huang, Zebin

AU - Wang, Ziwei

AU - Bai, Weibang

AU - Huang, Yanpei

AU - Sun, Lichao

AU - Xiao, Bo

AU - Yeatman, Eric M.

PY - 2021/12/14

Y1 - 2021/12/14

N2 - Human operators tend to experience increasing physical and mental workloads when performing teleoperation tasks in uncertain and dynamic environments. In addition, their performance is influenced by subjective factors, potentially leading to operational errors or task failure. Although agent-based methods offer a promising solution to these problems, human experience and intelligence remain necessary in teleoperation scenarios. In this paper, an integrated framework based on truncated quantile critics reinforcement learning is proposed for human–agent teleoperation, encompassing training, assessment and agent-based arbitration. The proposed framework supports an expert training agent and a bilateral training and cooperation process to realize the co-optimization of agent and human, and it provides efficient and quantifiable training feedback. Experiments were conducted to train subjects with the developed algorithm, and the performance of the human–human and human–agent cooperation modes was compared. The results show that, with the assistance of an agent, subjects completed reaching and pick-and-place tasks in a shorter operational time, with a higher success rate and a lower workload than in human–human cooperation.

AB - Human operators tend to experience increasing physical and mental workloads when performing teleoperation tasks in uncertain and dynamic environments. In addition, their performance is influenced by subjective factors, potentially leading to operational errors or task failure. Although agent-based methods offer a promising solution to these problems, human experience and intelligence remain necessary in teleoperation scenarios. In this paper, an integrated framework based on truncated quantile critics reinforcement learning is proposed for human–agent teleoperation, encompassing training, assessment and agent-based arbitration. The proposed framework supports an expert training agent and a bilateral training and cooperation process to realize the co-optimization of agent and human, and it provides efficient and quantifiable training feedback. Experiments were conducted to train subjects with the developed algorithm, and the performance of the human–human and human–agent cooperation modes was compared. The results show that, with the assistance of an agent, subjects completed reaching and pick-and-place tasks in a shorter operational time, with a higher success rate and a lower workload than in human–human cooperation.

U2 - 10.3390/s21248341

DO - 10.3390/s21248341

M3 - Journal article

VL - 21

JO - Sensors

JF - Sensors

SN - 1424-8220

IS - 24

M1 - 8341

ER -
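
Note

The abstract refers to truncated quantile critics (TQC) reinforcement learning. For readers unfamiliar with the method, the following is a minimal, illustrative sketch of the characteristic target-truncation step of TQC, assuming a PyTorch setting; the function name, tensor shapes and the drop-per-network hyperparameter are chosen for illustration and do not reproduce the authors' implementation.

import torch

# Illustrative sketch of the TQC target-truncation step (not the authors' code).
# next_atoms: (batch, n_nets, n_quantiles) quantile estimates from the target critics
# at the next state-action pair; reward and done are (batch,) tensors.
def truncated_target_atoms(next_atoms, reward, done, gamma=0.99, drop_per_net=2):
    batch, n_nets, n_quantiles = next_atoms.shape
    # Pool the atoms of all critics and sort them per sample.
    pooled = next_atoms.reshape(batch, n_nets * n_quantiles)
    sorted_atoms, _ = torch.sort(pooled, dim=1)
    # Drop the largest atoms to curb overestimation bias.
    keep = n_nets * n_quantiles - drop_per_net * n_nets
    truncated = sorted_atoms[:, :keep]
    # Distributional Bellman target built from the remaining atoms.
    return reward.unsqueeze(1) + gamma * (1.0 - done.unsqueeze(1)) * truncated

In the standard TQC formulation, the critics are then fitted to these targets with a quantile Huber loss, while the actor maximizes the mean of the untruncated atoms.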