
Electronic data

  • WACV_2025_LARD

    Accepted author manuscript, 1 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License


Vision-based Landing Guidance through Tracking and Orientation Estimation

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Forthcoming
Publication date: 28/10/2024
Host publication: IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
ISBN (electronic): 9798350318920
Original language: English

Abstract

Fixed-wing aerial vehicles are equipped with functionalities such as ILS (instrument landing system), PAR (precision approach radar) and DGPS (differential global positioning system), enabling fully automated landings. However, these systems impose significant costs on airport operations due to high installation and maintenance requirements. Moreover, since their navigation parameters come from ground or satellite signals, they are vulnerable to interference. A more cost-effective and independent alternative for landing guidance is a vision-based system that detects the runway and aligns the aircraft, reducing the pilot's cognitive load. This paper proposes a novel framework that addresses three key challenges in developing autonomous vision-based landing systems. Firstly, to overcome the lack of aerial front-view video data, we created high-quality videos simulating landing approaches using the generator code available in the LARD (landing approach runway detection dataset) repository. Secondly, in contrast to earlier studies that rely on object detection to find the runway, we use the state-of-the-art model LoRAT to track runways within bounding boxes in each video frame. Thirdly, to align the aircraft with the designated landing runway, we extract runway keypoints from the resulting LoRAT frames and estimate the camera's relative pose via the Perspective-n-Point algorithm. Our experimental results on a dataset of generated videos and original images from the LARD dataset consistently demonstrate the proposed framework's highly accurate tracking and alignment capabilities. The source code of our approach and the LoRAT model pre-trained with LARD videos are available at https://github.com/jpklock2/visionbased-landing-guidance
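The pose-estimation step described in the abstract (runway keypoints plus Perspective-n-Point) can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: the runway dimensions, keypoint pixel coordinates, and camera intrinsics are hypothetical placeholders, and OpenCV's solvePnP is used only as one possible PnP solver.

```python
# Minimal PnP sketch: recover the camera pose relative to the runway
# from four runway-corner keypoints. All numeric values are assumptions.
import numpy as np
import cv2

# 3D runway corners in a runway-fixed frame (metres); assumed 45 m x 3000 m strip.
object_points = np.array([
    [-22.5, 0.0,    0.0],   # near-left corner (threshold)
    [ 22.5, 0.0,    0.0],   # near-right corner
    [ 22.5, 0.0, 3000.0],   # far-right corner
    [-22.5, 0.0, 3000.0],   # far-left corner
], dtype=np.float64)

# 2D keypoints (pixels) extracted from the tracked frame, same corner order.
# Placeholder values standing in for the keypoint-extraction output.
image_points = np.array([
    [512.0, 640.0],
    [768.0, 640.0],
    [700.0, 420.0],
    [580.0, 420.0],
], dtype=np.float64)

# Pinhole intrinsics assumed known from the simulator/camera calibration.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

# Solve Perspective-n-Point for the camera pose in the runway frame.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs,
                              flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    R, _ = cv2.Rodrigues(rvec)          # rotation vector -> rotation matrix
    cam_pos = (-R.T @ tvec).ravel()     # camera position in runway coordinates
    print("camera position (runway frame, m):", cam_pos)
```

From the recovered rotation and translation, lateral offset and alignment angles with respect to the runway centreline follow directly, which is the quantity a landing-guidance loop would consume.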