


Federated Meta Learning for Visual Navigation in GPS-denied Urban Airspace

Research output: Contribution to conference (without ISBN/ISSN) › Conference paper › peer-review

Forthcoming
Publication date: 22/04/2023
Original language: English
Event: 42nd AIAA/IEEE Digital Avionics Systems Conference (DASC) - Barcelona, Spain
Duration: 1/10/2023 - 5/10/2023
https://2023.dasconline.org

Conference

Conference: 42nd AIAA/IEEE Digital Avionics Systems Conference (DASC)
Country/Territory: Spain
City: Barcelona
Period: 1/10/23 - 5/10/23
Internet address: https://2023.dasconline.org

Abstract

Urban air mobility (UAM) is a critical research area that combines vehicle technology, infrastructure, communication, and air traffic management within a unique and novel set of requirements. Navigation system requirements have become much more important for safe operation in urban environments, where these systems are vulnerable to cyber-attacks. Although the global navigation satellite system (GNSS) is the state-of-the-art solution for obtaining position, navigation, and timing (PNT) information, a redundant, GNSS-independent navigation system is necessary to support localization in GNSS-denied conditions. Recently, artificial intelligence (AI)-based visual navigation solutions have been widely adopted because of their robustness against challenging conditions such as low texture and low illumination. However, they adapt poorly to new environments when the dataset is too small to train and validate the system. Federated meta learning can address this by enabling fast adaptation to new operating conditions with a small dataset, but differing visual sensor characteristics and adversarial attacks add considerable complexity to its use in navigation. We therefore propose a robust-by-design federated meta learning based visual odometry algorithm that improves pose estimation accuracy, dynamically adapts to various environments through differentiable meta-models, and tunes its architecture to defend against cyber-attacks on the image data. In the proposed method, multiple learning loops (an inner loop and an outer loop) are dynamically generated. In the inner loops, each vehicle uses visual data collected under different flight conditions to train its own neural network locally for a particular condition. In the outer loop, vehicles then collaboratively train a global model that generalizes across heterogeneous vehicles to enable lifelong learning. The inner loop trains a task-specific model on local data, while the outer loop extracts common features from similar tasks and optimizes the meta-model's adaptability across similar navigation tasks. Moreover, a detection model built on key characteristics of the trained neural network's parameters identifies attacks.
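The abstract does not give the algorithm's details, but the inner-loop/outer-loop structure it describes can be illustrated with a minimal sketch. The code below is illustrative only, not the paper's method: a 1-D linear model stands in for the visual odometry network, synthetic regression tasks stand in for per-vehicle flight conditions, the outer loop uses a simple Reptile-style averaged meta update as one possible realization of the federated outer loop, and `flag_suspect_updates` is a hypothetical deviation-based stand-in for the parameter-based attack detector. All names and thresholds are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(slope):
    """Synthetic 1-D regression task standing in for one flight condition."""
    x = rng.uniform(-1.0, 1.0, size=(32, 1))
    y = slope * x
    return x, y

def loss_grad(theta, x, y):
    """Gradient of mean-squared error for the linear model y_hat = x @ theta."""
    err = x @ theta - y
    return 2.0 * x.T @ err / len(x)

def inner_adapt(theta, x, y, lr=0.1, steps=5):
    """Inner loop: task-specific adaptation on one vehicle's local data."""
    t = theta.copy()
    for _ in range(steps):
        t -= lr * loss_grad(t, x, y)
    return t

def outer_update(theta, adapted, meta_lr=0.5):
    """Outer loop: federated, Reptile-style meta update that moves the
    shared model toward the average of the vehicles' adapted parameters."""
    return theta + meta_lr * (np.mean(adapted, axis=0) - theta)

def flag_suspect_updates(adapted, ratio=5.0):
    """Hypothetical detector: flag vehicles whose adapted parameters
    deviate far from the fleet's elementwise median (a stand-in for the
    paper's parameter-based attack detection model)."""
    flat = adapted.reshape(len(adapted), -1)
    dists = np.linalg.norm(flat - np.median(flat, axis=0), axis=1)
    return dists > ratio * np.median(dists) + 1e-6

theta = np.zeros((1, 1))                       # shared meta-model
for _ in range(50):                            # communication rounds
    adapted = np.stack([inner_adapt(theta, *make_task(s))
                        for s in (1.0, 2.0, 3.0)])
    theta = outer_update(theta, adapted)
```

Because the tasks' slopes average to 2.0, the meta-parameters converge near that value, while each inner loop pulls its copy toward its own task; a poisoned client's parameters would sit far from the fleet median and be flagged by the detector sketch.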