
Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: A review

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: A review. / Zhang, Qiang; Liu, Yi; Blum, Rick S. et al.
In: Information Fusion, Vol. 40, 01.03.2018, p. 57-75.

Vancouver

Zhang Q, Liu Y, Blum RS, Han J, Tao D. Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: A review. Information Fusion. 2018 Mar 1;40:57-75. Epub 2017 Jun 9. doi: 10.1016/j.inffus.2017.05.006

Author

Zhang, Qiang; Liu, Yi; Blum, Rick S. et al. / Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: A review. In: Information Fusion. 2018; Vol. 40. pp. 57-75.

BibTeX

@article{caf7a2ae44564a89a2e79cd6b6764e75,
title = "Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: A review",
abstract = "As a result of several successful applications in computer vision and image processing, sparse representation (SR) has attracted significant attention in multi-sensor image fusion. Unlike the traditional multiscale transforms (MSTs) that presume the basis functions, SR learns an over-complete dictionary from a set of training images for image fusion, and it achieves more stable and meaningful representations of the source images. By doing so, the SR-based fusion methods generally outperform the traditional MST image fusion methods in both subjective and objective tests. In addition, they are less susceptible to mis-registration among the source images, thus facilitating the practical applications. This survey paper proposes a systematic review of the SR-based multi-sensor image fusion literature, highlighting the pros and cons of each category of approaches. Specifically, we start by performing a theoretical investigation of the entire system from three key algorithmic aspects, (1) sparse representation models; (2) dictionary learning methods; and (3) activity levels and fusion rules. Subsequently, we show how the existing works address these scientific problems and design the appropriate fusion rules for each application such as multi-focus image fusion and multi-modality (e.g., infrared and visible) image fusion. At last, we carry out some experiments to evaluate the impact of these three algorithmic components on the fusion performance when dealing with different applications. This article is expected to serve as a tutorial and source of reference for researchers preparing to enter the field or who desire to employ the sparse representation theory in other fields.",
keywords = "Image fusion, Sparse representation, Dictionary learning, Activity level",
author = "Qiang Zhang and Yi Liu and Blum, {Rick S.} and Jungong Han and Dacheng Tao",
year = "2018",
month = mar,
day = "1",
doi = "10.1016/j.inffus.2017.05.006",
language = "English",
volume = "40",
pages = "57--75",
journal = "Information Fusion",
issn = "1566-2535",
publisher = "Elsevier",
}
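The pipeline the abstract describes — sparse-code each co-registered source patch over an over-complete dictionary, compare activity levels, and reconstruct the fused patch from the winning code — can be sketched at patch level as below. This is a minimal illustration, not the paper's implementation: it substitutes a fixed over-complete DCT dictionary for a learned one, uses a simple OMP sparse coder, and applies the common max-L1 activity-level fusion rule; all function names and parameters are illustrative.

```python
import numpy as np

def dct_dictionary(patch_size=8, atoms_1d=16):
    """Over-complete 2-D DCT dictionary: atoms_1d**2 unit-norm atoms of size patch_size**2."""
    d = np.zeros((patch_size, atoms_1d))
    for k in range(atoms_1d):
        v = np.cos(np.arange(patch_size) * k * np.pi / atoms_1d)
        if k > 0:
            v -= v.mean()                    # zero-mean AC atoms
        d[:, k] = v / np.linalg.norm(v)
    return np.kron(d, d)                     # shape (patch_size**2, atoms_1d**2)

def omp(D, y, n_nonzero=4):
    """Orthogonal Matching Pursuit: greedy sparse code of vector y over dictionary D."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        idx = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if idx not in support:
            support.append(idx)
        x, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)  # re-fit on support
        residual = y - D[:, support] @ x
    coef[support] = x
    return coef

def fuse_patches(pA, pB, D, n_nonzero=4):
    """Max-L1 fusion rule: keep the sparse code with the larger L1 norm (activity level)."""
    mA, mB = pA.mean(), pB.mean()            # code the zero-mean residual patches
    cA = omp(D, pA - mA, n_nonzero)
    cB = omp(D, pB - mB, n_nonzero)
    if np.abs(cA).sum() >= np.abs(cB).sum():
        return D @ cA + mA
    return D @ cB + mB
```

In a full fusion method the same step would run over sliding patches of both source images, with overlapping reconstructions averaged back into the fused image; the survey's three algorithmic aspects correspond to swapping the SR model (`omp`), the dictionary (`dct_dictionary` vs. a learned one), and the activity level / fusion rule inside `fuse_patches`.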

RIS

TY  - JOUR
T1  - Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images
T2  - A review
AU  - Zhang, Qiang
AU  - Liu, Yi
AU  - Blum, Rick S.
AU  - Han, Jungong
AU  - Tao, Dacheng
PY  - 2018/3/1
Y1  - 2018/3/1
N2  - As a result of several successful applications in computer vision and image processing, sparse representation (SR) has attracted significant attention in multi-sensor image fusion. Unlike the traditional multiscale transforms (MSTs) that presume the basis functions, SR learns an over-complete dictionary from a set of training images for image fusion, and it achieves more stable and meaningful representations of the source images. By doing so, the SR-based fusion methods generally outperform the traditional MST image fusion methods in both subjective and objective tests. In addition, they are less susceptible to mis-registration among the source images, thus facilitating the practical applications. This survey paper proposes a systematic review of the SR-based multi-sensor image fusion literature, highlighting the pros and cons of each category of approaches. Specifically, we start by performing a theoretical investigation of the entire system from three key algorithmic aspects, (1) sparse representation models; (2) dictionary learning methods; and (3) activity levels and fusion rules. Subsequently, we show how the existing works address these scientific problems and design the appropriate fusion rules for each application such as multi-focus image fusion and multi-modality (e.g., infrared and visible) image fusion. At last, we carry out some experiments to evaluate the impact of these three algorithmic components on the fusion performance when dealing with different applications. This article is expected to serve as a tutorial and source of reference for researchers preparing to enter the field or who desire to employ the sparse representation theory in other fields.
AB  - As a result of several successful applications in computer vision and image processing, sparse representation (SR) has attracted significant attention in multi-sensor image fusion. Unlike the traditional multiscale transforms (MSTs) that presume the basis functions, SR learns an over-complete dictionary from a set of training images for image fusion, and it achieves more stable and meaningful representations of the source images. By doing so, the SR-based fusion methods generally outperform the traditional MST image fusion methods in both subjective and objective tests. In addition, they are less susceptible to mis-registration among the source images, thus facilitating the practical applications. This survey paper proposes a systematic review of the SR-based multi-sensor image fusion literature, highlighting the pros and cons of each category of approaches. Specifically, we start by performing a theoretical investigation of the entire system from three key algorithmic aspects, (1) sparse representation models; (2) dictionary learning methods; and (3) activity levels and fusion rules. Subsequently, we show how the existing works address these scientific problems and design the appropriate fusion rules for each application such as multi-focus image fusion and multi-modality (e.g., infrared and visible) image fusion. At last, we carry out some experiments to evaluate the impact of these three algorithmic components on the fusion performance when dealing with different applications. This article is expected to serve as a tutorial and source of reference for researchers preparing to enter the field or who desire to employ the sparse representation theory in other fields.
KW  - Image fusion
KW  - Sparse representation
KW  - Dictionary learning
KW  - Activity level
U2  - 10.1016/j.inffus.2017.05.006
DO  - 10.1016/j.inffus.2017.05.006
M3  - Journal article
VL  - 40
SP  - 57
EP  - 75
JO  - Information Fusion
JF  - Information Fusion
SN  - 1566-2535
ER  -