
Electronic data

  • TGRS_IGGAN (1)

    Accepted author manuscript, 6.7 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License

Links

Text available via DOI:


IG-GAN: Interactive Guided Generative Adversarial Networks for Multimodal Image Fusion

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Journal publication date: 31/12/2024
Journal: IEEE Transactions on Geoscience and Remote Sensing
Volume: 62
Publication status: E-pub ahead of print
Early online date: 25/07/24
Original language: English

Abstract

Multimodal image fusion has recently garnered increasing interest in the field of remote sensing. By leveraging the complementary information in different modalities, the fused results may be more favorable for characterizing objects of interest, thereby increasing the chance of a more comprehensive and accurate perception of the scene. Unfortunately, most existing fusion methods tend to extract modality-specific features independently, without considering inter-modal alignment and complementarity, leading to a suboptimal fusion process. To address this issue, we propose a novel interactive guided generative adversarial network, named IG-GAN, for the task of multimodal image fusion. IG-GAN comprises guided dual streams tailored for enhanced learning of details and content, as well as cross-modal consistency. Specifically, a details-guided interactive running-in module and a content-guided interactive running-in module are developed, with the stronger modality serving as guidance for detail richness or content integrity and the weaker one assisting. To fully integrate multi-granularity features from the two modalities, a hierarchical fusion and reconstruction branch is established: a shallow interactive fusion module followed by a multi-level interactive fusion module aggregates multi-level local and long-range features. For feature decoding and fused-image generation, a high-level interactive fusion and reconstruction module is further developed. Additionally, to enable the fusion network to generate fused images with complete content, sharp edges, and high fidelity without supervision, a loss function is formulated that drives a mutual game between the generator and two discriminators. Comparative experiments with fourteen state-of-the-art methods are conducted on three datasets. Qualitative and quantitative results indicate that IG-GAN shows clear superiority in terms of both visual effect and quantitative metrics. Moreover, experiments on two RGB-IR object detection datasets demonstrate that IG-GAN can improve the accuracy of object detection by integrating complementary information from different modalities. The code will be available at https://github.com/flower6top.
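
The abstract describes a dual-stream generator, in which the stronger modality guides the weaker one, trained in a mutual game against two discriminators, one per source modality. The sketch below is a minimal, hypothetical PyTorch illustration of that general setup; the module names (GuidedStream, Generator, Discriminator), layer sizes, use of concatenation for guidance, and the least-squares adversarial loss are assumptions made for illustration and do not reproduce the authors' architecture or loss function.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """3x3 convolution + LeakyReLU, used here as a generic building block."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.LeakyReLU(0.2))


class GuidedStream(nn.Module):
    """One guided stream: features of the 'stronger' (guiding) modality are mixed
    with those of the 'weaker' (assisting) modality. How the running-in modules
    actually interact is not specified in the abstract; concatenation is assumed."""

    def __init__(self, ch=32):
        super().__init__()
        self.enc_guide = conv_block(1, ch)
        self.enc_assist = conv_block(1, ch)
        self.mix = conv_block(2 * ch, ch)

    def forward(self, guide, assist):
        return self.mix(torch.cat([self.enc_guide(guide), self.enc_assist(assist)], dim=1))


class Generator(nn.Module):
    """Details- and content-guided streams followed by a fusion/reconstruction head."""

    def __init__(self, ch=32):
        super().__init__()
        self.details_stream = GuidedStream(ch)  # e.g. visible image as the guide
        self.content_stream = GuidedStream(ch)  # e.g. infrared image as the guide
        self.fuse = nn.Sequential(conv_block(2 * ch, ch), nn.Conv2d(ch, 1, 1), nn.Tanh())

    def forward(self, vis, ir):
        d = self.details_stream(vis, ir)
        c = self.content_stream(ir, vis)
        return self.fuse(torch.cat([d, c], dim=1))


class Discriminator(nn.Module):
    """Patch-style discriminator; one instance is kept per source modality."""

    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, ch), conv_block(ch, ch), nn.Conv2d(ch, 1, 1))

    def forward(self, x):
        return self.net(x)


def lsgan_loss(pred, target_is_real):
    """Least-squares GAN objective (an assumed choice of adversarial loss)."""
    target = torch.ones_like(pred) if target_is_real else torch.zeros_like(pred)
    return F.mse_loss(pred, target)


def adversarial_losses(G, D_vis, D_ir, vis, ir):
    """The 'mutual game': each discriminator separates its own modality from the
    fused image, while the generator tries to fool both discriminators."""
    fused = G(vis, ir)
    d_loss = (lsgan_loss(D_vis(vis), True) + lsgan_loss(D_vis(fused.detach()), False)
              + lsgan_loss(D_ir(ir), True) + lsgan_loss(D_ir(fused.detach()), False))
    g_loss = lsgan_loss(D_vis(fused), True) + lsgan_loss(D_ir(fused), True)
    return g_loss, d_loss


if __name__ == "__main__":
    vis, ir = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
    g_loss, d_loss = adversarial_losses(Generator(), Discriminator(), Discriminator(), vis, ir)
    print(float(g_loss), float(d_loss))
```

Using one discriminator per modality, as in this sketch, is what lets the fused image be pushed toward retaining traits of both sources; the paper's actual content/detail supervision terms and interactive fusion modules are not modeled here.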