

Issues in Vision, Semi-autonomous Control, Haptics and Manipulation in Robotics for Nuclear Decommissioning

Research output: Contribution to Journal/Magazine › Review article › peer-review

Publication status: Published
Journal publication date: 26/03/2019
Journal: Robotics & Automation Engineering Journal
Volume: 4
Issue number: 2
Pages (from-to): 1-4
Number of pages: 4
Original language: English

Abstract

Traditionally, the nuclear industry has preferred tele-operated control for robotic applications such as decommissioning. This is due to obvious safety reasons, along with less apparent motivations such as the safeguarding of industry jobs and a lack of coding expertise within the industry. However, problems with such techniques have become evident over the past few years, mostly associated with operator fatigue leading to errors. A typical modern autonomous robotic system will utilise some form of stereoscopic 3D vision system (often based on LIDAR) to aid recognition. However, this information can be hard to relay to a human tele-operator who is unaccustomed to it. Further, tele-operation of a modern robot is a truly complex and specialised skill, and there is a lack of people within the nuclear industry (and indeed within industry as a whole) who possess it. A potential solution to these problems may be the use of semi-autonomous control, where the robot's artificial intelligence handles low-level tasks while the human operator makes the higher-level decisions. Instead of directly controlling the robot via two joysticks, the operator would more likely be presented with a large touchscreen showing a list of tasks and highlighted objects on which the robot can perform them.
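The division of labour described in the abstract can be sketched in code. The following is a minimal, hypothetical illustration (not from the paper): `detect_tasks` stands in for the autonomous perception pipeline that proposes feasible actions on recognised objects, `operator_select` models the human's high-level choice from a touchscreen task list, and `execute` represents the low-level autonomy that carries the task out. All names and the scene contents are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """A high-level action the robot can offer to the operator."""
    object_id: str
    action: str

def detect_tasks(scene):
    """Low-level autonomy: propose a feasible task for each recognised
    object (stands in for a LIDAR / stereo-vision recognition pipeline)."""
    return [Task(object_id=obj, action="grasp") for obj in scene]

def operator_select(tasks, choice_index):
    """High-level decision: the human picks one task from the presented
    list, as they would from highlighted objects on a touchscreen."""
    return tasks[choice_index]

def execute(task):
    """Low-level autonomy performs the selected task; the operator never
    drives the manipulator joint-by-joint with joysticks."""
    return f"{task.action} {task.object_id}: done"

# Hypothetical decommissioning scene with two recognised objects.
scene = ["pipe_section", "contaminated_drum"]
tasks = detect_tasks(scene)
result = execute(operator_select(tasks, 1))
print(result)  # → grasp contaminated_drum: done
```

The point of the sketch is the interface boundary: the operator's only input is a selection from a machine-generated task list, so fatigue-prone continuous control is eliminated while the human retains the safety-critical decisions.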