TY - JOUR
T1 - IG-GAN
T2 - Interactive Guided Generative Adversarial Networks for Multimodal Image Fusion
AU - Sui, Chenhong
AU - Yang, Guobin
AU - Hong, Danfeng
AU - Wang, Haipeng
AU - Yao, Jing
AU - Atkinson, Peter M
AU - Ghamisi, Pedram
PY - 2024/12/31
Y1 - 2024/12/31
N2 - Multimodal image fusion has recently garnered increasing interest in the field of remote sensing. By leveraging the complementary information in different modalities, the fused results can characterize objects of interest more effectively, thereby increasing the chance of a more comprehensive and accurate perception of the scene. Unfortunately, most existing fusion methods tend to extract modality-specific features independently, without considering inter-modal alignment and complementarity, leading to a suboptimal fusion process. To address this issue, we propose a novel interactive guided generative adversarial network, named IG-GAN, for the task of multimodal image fusion. IG-GAN comprises guided dual streams tailored for enhanced learning of details and content, as well as cross-modal consistency. Specifically, a details-guided interactive running-in module and a content-guided interactive running-in module are developed, with the stronger modality serving as guidance for detail richness or content integrity and the weaker one assisting. To fully integrate multi-granularity features from the two modalities, a hierarchical fusion and reconstruction branch is established, in which a shallow interactive fusion module followed by a multi-level interactive fusion module aggregates multi-level local and long-range features. For feature decoding and fused image generation, a high-level interactive fusion and reconstruction module is further developed. Additionally, to enable the fusion network to generate fused images with complete content, sharp edges, and high fidelity without supervision, a loss function based on the mutual game between the generator and two discriminators is formulated. Comparative experiments with fourteen state-of-the-art methods are conducted on three datasets. Qualitative and quantitative results indicate that IG-GAN exhibits clear superiority in both visual quality and quantitative metrics. Moreover, experiments on two RGB-IR object detection datasets demonstrate that IG-GAN can enhance object detection accuracy by integrating complementary information from different modalities. The code will be available at https://github.com/flower6top.
AB - Multimodal image fusion has recently garnered increasing interest in the field of remote sensing. By leveraging the complementary information in different modalities, the fused results can characterize objects of interest more effectively, thereby increasing the chance of a more comprehensive and accurate perception of the scene. Unfortunately, most existing fusion methods tend to extract modality-specific features independently, without considering inter-modal alignment and complementarity, leading to a suboptimal fusion process. To address this issue, we propose a novel interactive guided generative adversarial network, named IG-GAN, for the task of multimodal image fusion. IG-GAN comprises guided dual streams tailored for enhanced learning of details and content, as well as cross-modal consistency. Specifically, a details-guided interactive running-in module and a content-guided interactive running-in module are developed, with the stronger modality serving as guidance for detail richness or content integrity and the weaker one assisting. To fully integrate multi-granularity features from the two modalities, a hierarchical fusion and reconstruction branch is established, in which a shallow interactive fusion module followed by a multi-level interactive fusion module aggregates multi-level local and long-range features. For feature decoding and fused image generation, a high-level interactive fusion and reconstruction module is further developed. Additionally, to enable the fusion network to generate fused images with complete content, sharp edges, and high fidelity without supervision, a loss function based on the mutual game between the generator and two discriminators is formulated. Comparative experiments with fourteen state-of-the-art methods are conducted on three datasets. Qualitative and quantitative results indicate that IG-GAN exhibits clear superiority in both visual quality and quantitative metrics. Moreover, experiments on two RGB-IR object detection datasets demonstrate that IG-GAN can enhance object detection accuracy by integrating complementary information from different modalities. The code will be available at https://github.com/flower6top.
U2 - 10.1109/tgrs.2024.3433619
DO - 10.1109/tgrs.2024.3433619
M3 - Journal article
VL - 62
JO - IEEE Transactions on Geoscience and Remote Sensing
JF - IEEE Transactions on Geoscience and Remote Sensing
SN - 0196-2892
ER -
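
Note: the abstract describes a mutual game between one generator and two discriminators for fusing two modalities (for example, infrared and visible imagery). The sketch below is a minimal, hypothetical PyTorch illustration of that general idea only, assuming single-channel inputs; the names TinyFusionGenerator, TinyDiscriminator, and fusion_gan_losses are placeholders and do not reflect the actual IG-GAN architecture, its guided running-in modules, or its loss terms.

# Hypothetical sketch of a one-generator / two-discriminator fusion game.
# Not the authors' implementation; all modules and names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFusionGenerator(nn.Module):
    # Toy generator: concatenates IR and visible inputs and maps them to one fused image.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x_ir, x_vis):
        return self.net(torch.cat([x_ir, x_vis], dim=1))

class TinyDiscriminator(nn.Module):
    # Toy patch-style discriminator for a single-channel image; outputs logits.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def fusion_gan_losses(gen, d_ir, d_vis, x_ir, x_vis):
    # One generator vs. two discriminators: each discriminator tries to tell its own
    # source modality apart from the fused image, while the generator tries to fool both.
    bce = nn.BCEWithLogitsLoss()
    fused = gen(x_ir, x_vis)

    def disc_loss(d, real, fake):
        real_logits, fake_logits = d(real), d(fake.detach())
        return bce(real_logits, torch.ones_like(real_logits)) + \
               bce(fake_logits, torch.zeros_like(fake_logits))

    d_ir_loss = disc_loss(d_ir, x_ir, fused)
    d_vis_loss = disc_loss(d_vis, x_vis, fused)

    ir_logits, vis_logits = d_ir(fused), d_vis(fused)
    g_adv = bce(ir_logits, torch.ones_like(ir_logits)) + \
            bce(vis_logits, torch.ones_like(vis_logits))
    # Crude stand-in for detail/content terms: keep the fused image close to the
    # average of the two inputs (the paper's actual losses are more elaborate).
    g_content = F.l1_loss(fused, 0.5 * (x_ir + x_vis))
    return g_adv + g_content, d_ir_loss, d_vis_loss

# Usage example with random single-channel inputs:
# gen, d_ir, d_vis = TinyFusionGenerator(), TinyDiscriminator(), TinyDiscriminator()
# x_ir, x_vis = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
# g_loss, d_ir_loss, d_vis_loss = fusion_gan_losses(gen, d_ir, d_vis, x_ir, x_vis)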