Electronic data

  • RIA_submission-2

    Accepted author manuscript, 3.3 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License

Links

Text available via DOI: https://doi.org/10.1108/RIA-04-2023-0049


Boosting visual servoing performance through RGB-based methods

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Boosting visual servoing performance through RGB-based methods. / Fei, Haolin; Wang, Ziwei; Tedeschi, Stefano et al.
In: Robotic Intelligence and Automation, Vol. 43, No. 4, 21.08.2023, p. 468-475.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Harvard

Fei, H, Wang, Z, Tedeschi, S & Kennedy, A 2023, 'Boosting visual servoing performance through RGB-based methods', Robotic Intelligence and Automation, vol. 43, no. 4, pp. 468-475. https://doi.org/10.1108/RIA-04-2023-0049

APA

Fei, H., Wang, Z., Tedeschi, S., & Kennedy, A. (2023). Boosting visual servoing performance through RGB-based methods. Robotic Intelligence and Automation, 43(4), 468-475. https://doi.org/10.1108/RIA-04-2023-0049

Vancouver

Fei H, Wang Z, Tedeschi S, Kennedy A. Boosting visual servoing performance through RGB-based methods. Robotic Intelligence and Automation. 2023 Aug 21;43(4):468-475. Epub 2023 Jul 13. doi: 10.1108/RIA-04-2023-0049

Author

Fei, Haolin ; Wang, Ziwei ; Tedeschi, Stefano et al. / Boosting visual servoing performance through RGB-based methods. In: Robotic Intelligence and Automation. 2023 ; Vol. 43, No. 4. pp. 468-475.

BibTeX

@article{973c17cc107f4ae59e817bc233f3d6bc,
title = "Boosting visual servoing performance through RGB-based methods",
abstract = "Purpose: This paper aims to evaluate and compare the performance of different computer vision algorithms in the context of visual servoing for augmented robot perception and autonomy. Design/methodology/approach: The authors evaluated and compared three different approaches: a feature-based approach, a hybrid approach and a machine-learning-based approach. To evaluate the performance of the approaches, experiments were conducted in a simulated environment using the PyBullet physics simulator. The experiments included different levels of complexity, including different numbers of distractors, varying lighting conditions and highly varied object geometry. Findings: The experimental results showed that the machine-learning-based approach outperformed the other two approaches in terms of accuracy and robustness. The approach could detect and locate objects in complex scenes with high accuracy, even in the presence of distractors and varying lighting conditions. The hybrid approach showed promising results but was less robust to changes in lighting and object appearance. The feature-based approach performed well in simple scenes but struggled in more complex ones. Originality/value: This paper sheds light on the superiority of a hybrid algorithm that incorporates a deep neural network in a feature detector for image-based visual servoing, which demonstrates stronger robustness in object detection and location against distractors and lighting conditions.",
author = "Haolin Fei and Ziwei Wang and Stefano Tedeschi and Andrew Kennedy",
year = "2023",
month = aug,
day = "21",
doi = "10.1108/RIA-04-2023-0049",
language = "English",
volume = "43",
pages = "468--475",
journal = "Robotic Intelligence and Automation",
number = "4",

}

RIS

TY - JOUR

T1 - Boosting visual servoing performance through RGB-based methods

AU - Fei, Haolin

AU - Wang, Ziwei

AU - Tedeschi, Stefano

AU - Kennedy, Andrew

PY - 2023/8/21

Y1 - 2023/8/21

N2 - Purpose: This paper aims to evaluate and compare the performance of different computer vision algorithms in the context of visual servoing for augmented robot perception and autonomy. Design/methodology/approach: The authors evaluated and compared three different approaches: a feature-based approach, a hybrid approach and a machine-learning-based approach. To evaluate the performance of the approaches, experiments were conducted in a simulated environment using the PyBullet physics simulator. The experiments included different levels of complexity, including different numbers of distractors, varying lighting conditions and highly varied object geometry. Findings: The experimental results showed that the machine-learning-based approach outperformed the other two approaches in terms of accuracy and robustness. The approach could detect and locate objects in complex scenes with high accuracy, even in the presence of distractors and varying lighting conditions. The hybrid approach showed promising results but was less robust to changes in lighting and object appearance. The feature-based approach performed well in simple scenes but struggled in more complex ones. Originality/value: This paper sheds light on the superiority of a hybrid algorithm that incorporates a deep neural network in a feature detector for image-based visual servoing, which demonstrates stronger robustness in object detection and location against distractors and lighting conditions.

AB - Purpose: This paper aims to evaluate and compare the performance of different computer vision algorithms in the context of visual servoing for augmented robot perception and autonomy. Design/methodology/approach: The authors evaluated and compared three different approaches: a feature-based approach, a hybrid approach and a machine-learning-based approach. To evaluate the performance of the approaches, experiments were conducted in a simulated environment using the PyBullet physics simulator. The experiments included different levels of complexity, including different numbers of distractors, varying lighting conditions and highly varied object geometry. Findings: The experimental results showed that the machine-learning-based approach outperformed the other two approaches in terms of accuracy and robustness. The approach could detect and locate objects in complex scenes with high accuracy, even in the presence of distractors and varying lighting conditions. The hybrid approach showed promising results but was less robust to changes in lighting and object appearance. The feature-based approach performed well in simple scenes but struggled in more complex ones. Originality/value: This paper sheds light on the superiority of a hybrid algorithm that incorporates a deep neural network in a feature detector for image-based visual servoing, which demonstrates stronger robustness in object detection and location against distractors and lighting conditions.

U2 - 10.1108/RIA-04-2023-0049

DO - 10.1108/RIA-04-2023-0049

M3 - Journal article

VL - 43

SP - 468

EP - 475

JO - Robotic Intelligence and Automation

JF - Robotic Intelligence and Automation

IS - 4

ER -
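
Illustrative note

The abstract describes image-based visual servoing (IBVS) evaluated in a PyBullet simulation. As an illustration only, the Python sketch below shows the classical IBVS control law, camera twist v = -lambda * L^+ * (s - s*), on which feature-based pipelines of this kind build. It is not taken from the paper; the function names, gain, depths and toy feature values are assumptions.

import numpy as np

def interaction_matrix(x, y, Z):
    # Interaction (image Jacobian) matrix of one normalised point feature
    # at image coordinates (x, y) and estimated depth Z.
    return np.array([
        [-1.0 / Z, 0.0,      x / Z,  x * y,       -(1.0 + x**2),  y],
        [0.0,     -1.0 / Z,  y / Z,  1.0 + y**2,  -x * y,        -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    # Classical IBVS law: v = -gain * pinv(L) @ (s - s*).
    # features, desired: (N, 2) current / desired normalised point features
    # depths:            (N,)  estimated feature depths
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (features - desired).reshape(-1)     # stacked feature error s - s*
    return -gain * np.linalg.pinv(L) @ error     # 6-vector (vx, vy, vz, wx, wy, wz)

# Toy usage: four point features slightly offset from the goal configuration.
s_star = np.array([[-0.1, -0.1], [0.1, -0.1], [0.1, 0.1], [-0.1, 0.1]])
s = s_star + 0.02
v_cam = ibvs_velocity(s, s_star, depths=np.full(4, 0.5))
print(v_cam)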