
Electronic data

  • WACV_2025_LARD

    Accepted author manuscript, 1 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License


Vision-based Landing Guidance through Tracking and Orientation Estimation

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Forthcoming

Standard

Vision-based Landing Guidance through Tracking and Orientation Estimation. / Ferreira, João; Pinto, João; Moura, Júlia et al.
IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). 2024.


Harvard

Ferreira, J, Pinto, J, Moura, J, Li, Y, Castro, C & Angelov, P 2024, Vision-based Landing Guidance through Tracking and Orientation Estimation. in IEEE/CVF Winter Conference on Applications of Computer Vision (WACV).

APA

Ferreira, J., Pinto, J., Moura, J., Li, Y., Castro, C., & Angelov, P. (in press). Vision-based Landing Guidance through Tracking and Orientation Estimation. In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)

Vancouver

Ferreira J, Pinto J, Moura J, Li Y, Castro C, Angelov P. Vision-based Landing Guidance through Tracking and Orientation Estimation. In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). 2024

Author

Ferreira, João ; Pinto, João ; Moura, Júlia et al. / Vision-based Landing Guidance through Tracking and Orientation Estimation. IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). 2024.

Bibtex

@inproceedings{4aab151ea58148c1abd46c777f41aa9e,
title = "Vision-based Landing Guidance through Tracking and Orientation Estimation",
abstract = "Fixed-wing aerial vehicles are equipped with functionalities such as ILS (instrument landing system), PAR (precision approach radar) and DGPS (differential global positioning system), enabling fully automated landings. However, these systems impose significant costs on airport operations due to high installation and maintenance requirements. Moreover, since these navigation parameters come from ground or satellite signals, they are vulnerable to interference. A more cost-effective and independent alternative for guiding landing is a vision-based system that detects the runway and aligns the aircraft, reducing the pilot{\textquoteright}s cognitive load. This paper proposes a novel framework that addresses three key challenges in developing autonomous vision-based landing systems. Firstly, to overcome the lack of aerial front-view video data, we created high-quality videos simulating landing approaches through the generator code available in the LARD (landing approach runway detection dataset) repository. Secondly, in contrast to former studies focusing on object detection for finding the runway, we chose the state-of-the-art model LoRAT to track runways within bounding boxes in each video frame. Thirdly, to align the aircraft with the designated landing runway, we extract runway keypoints from the resulting LoRAT frames and estimate the camera relative pose via the Perspective-n-Point algorithm. Our experimental results over a dataset of generated videos and original images from the LARD dataset consistently demonstrate the proposed framework{\textquoteright}s highly accurate tracking and alignment capabilities. Our approach{\textquoteright}s source code and the LoRAT model pre-trained with LARD videos are available at https://github.com/jpklock2/visionbased-landing-guidance",
author = "Jo{\~a}o Ferreira and Jo{\~a}o Pinto and J{\'u}lia Moura and Yi Li and Cristiano Castro and Plamen Angelov",
year = "2024",
month = oct,
day = "28",
language = "English",
booktitle = "IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)",

}

RIS

TY - GEN

T1 - Vision-based Landing Guidance through Tracking and Orientation Estimation

AU - Ferreira, João

AU - Pinto, João

AU - Moura, Júlia

AU - Li, Yi

AU - Castro, Cristiano

AU - Angelov, Plamen

PY - 2024/10/28

Y1 - 2024/10/28

N2 - Fixed-wing aerial vehicles are equipped with functionalities such as ILS (instrument landing system), PAR (precision approach radar) and DGPS (differential global positioning system), enabling fully automated landings. However, these systems impose significant costs on airport operations due to high installation and maintenance requirements. Moreover, since these navigation parameters come from ground or satellite signals, they are vulnerable to interference. A more cost-effective and independent alternative for guiding landing is a vision-based system that detects the runway and aligns the aircraft, reducing the pilot’s cognitive load. This paper proposes a novel framework that addresses three key challenges in developing autonomous vision-based landing systems. Firstly, to overcome the lack of aerial front-view video data, we created high-quality videos simulating landing approaches through the generator code available in the LARD (landing approach runway detection dataset) repository. Secondly, in contrast to former studies focusing on object detection for finding the runway, we chose the state-of-the-art model LoRAT to track runways within bounding boxes in each video frame. Thirdly, to align the aircraft with the designated landing runway, we extract runway keypoints from the resulting LoRAT frames and estimate the camera relative pose via the Perspective-n-Point algorithm. Our experimental results over a dataset of generated videos and original images from the LARD dataset consistently demonstrate the proposed framework’s highly accurate tracking and alignment capabilities. Our approach’s source code and the LoRAT model pre-trained with LARD videos are available at https://github.com/jpklock2/visionbased-landing-guidance

AB - Fixed-wing aerial vehicles are equipped with functionalities such as ILS (instrument landing system), PAR (precision approach radar) and DGPS (differential global positioning system), enabling fully automated landings. However, these systems impose significant costs on airport operations due to high installation and maintenance requirements. Moreover, since these navigation parameters come from ground or satellite signals, they are vulnerable to interference. A more cost-effective and independent alternative for guiding landing is a vision-based system that detects the runway and aligns the aircraft, reducing the pilot’s cognitive load. This paper proposes a novel framework that addresses three key challenges in developing autonomous vision-based landing systems. Firstly, to overcome the lack of aerial front-view video data, we created high-quality videos simulating landing approaches through the generator code available in the LARD (landing approach runway detection dataset) repository. Secondly, in contrast to former studies focusing on object detection for finding the runway, we chose the state-of-the-art model LoRAT to track runways within bounding boxes in each video frame. Thirdly, to align the aircraft with the designated landing runway, we extract runway keypoints from the resulting LoRAT frames and estimate the camera relative pose via the Perspective-n-Point algorithm. Our experimental results over a dataset of generated videos and original images from the LARD dataset consistently demonstrate the proposed framework’s highly accurate tracking and alignment capabilities. Our approach’s source code and the LoRAT model pre-trained with LARD videos are available at https://github.com/jpklock2/visionbased-landing-guidance

M3 - Conference contribution/Paper

BT - IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)

ER -
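
The abstract's third step, estimating the camera's relative pose from runway keypoints via Perspective-n-Point, can be illustrated in isolation. Because runway keypoints are coplanar (the runway lies in a ground plane), the PnP problem reduces to estimating and decomposing a planar homography. The sketch below is illustrative only and assumes four runway-corner correspondences, known camera intrinsics, and made-up runway dimensions; the function name and the homography-based method are this sketch's assumptions, not code from the authors' repository, which pairs keypoint extraction with the LoRAT tracker as described in the paper.

```python
import numpy as np

def estimate_planar_pose(world_xy, image_pts, K):
    """Recover camera pose from >= 4 coplanar (z = 0) keypoint correspondences.

    world_xy  : (N, 2) runway keypoints in metres on the ground plane
    image_pts : (N, 2) matching pixel coordinates
    K         : (3, 3) camera intrinsics
    Returns (R, t) such that x_cam = R @ [X, Y, 0] + t.
    """
    # Direct Linear Transform: stack two equations per correspondence
    # for the homography H mapping [X, Y, 1] to image coordinates.
    A = []
    for (X, Y), (u, v) in zip(world_xy, image_pts):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)          # right singular vector of least singular value

    # For a z = 0 plane, H = K [r1 r2 t] up to scale; peel off K and rescale.
    M = np.linalg.inv(K) @ H
    s = 2.0 / (np.linalg.norm(M[:, 0]) + np.linalg.norm(M[:, 1]))
    if M[2, 2] < 0:                   # keep the runway in front of the camera
        s = -s
    r1, r2, t = s * M[:, 0], s * M[:, 1], s * M[:, 2]

    # Complete the rotation and project it onto SO(3) to remove numerical drift.
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)
    R = U @ Vt
    if np.linalg.det(R) < 0:
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R, t
```

With the pose in hand, alignment cues (lateral offset, height above the runway plane) follow directly from `t`, and the approach angle from `R`. Production systems such as the one in the paper would typically use a robust PnP solver over all extracted keypoints rather than this minimal four-point homography.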