
Electronic data

  • 1-s2.0-S0031320318302139-main

    Rights statement: This is the author’s version of a work that was accepted for publication in Pattern Recognition. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Pattern Recognition, 83, 2018 DOI: 10.1016/j.patcog.2018.06.003

    Accepted author manuscript, 1.86 MB, PDF document

    Available under license: CC BY-NC-ND: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License


Robust sparse representation based multi-focus image fusion with dictionary construction and local spatial consistency

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Robust sparse representation based multi-focus image fusion with dictionary construction and local spatial consistency. / Zhang, Qiang; Shi, Tao; Wang, Fan et al.
In: Pattern Recognition, Vol. 83, 11.2018, p. 299-313.


Harvard

APA

Vancouver

Zhang Q, Shi T, Wang F, Blum RS, Han J. Robust sparse representation based multi-focus image fusion with dictionary construction and local spatial consistency. Pattern Recognition. 2018 Nov;83:299-313. Epub 2018 Jun 6. doi: 10.1016/j.patcog.2018.06.003

Author

Zhang, Qiang; Shi, Tao; Wang, Fan et al. / Robust sparse representation based multi-focus image fusion with dictionary construction and local spatial consistency. In: Pattern Recognition. 2018; Vol. 83. pp. 299-313.

BibTeX

@article{d625968b5a1d4f86b2dcb29780caa317,
title = "Robust sparse representation based multi-focus image fusion with dictionary construction and local spatial consistency",
abstract = "Recently, sparse representation-based (SR) methods have been presented for the fusion of multi-focus images. However, most of them independently consider the local information from each image patch during sparse coding and fusion, giving rise to the spatial artifacts on the fused image. In order to overcome this issue, we present a novel multi-focus image fusion method by jointly considering information from each local image patch as well as its spatial contextual information during the sparse coding and fusion in this paper. Specifically, we employ a robust sparse representation (LR_RSR, for short) model with a Laplacian regularization term on the sparse error matrix in the sparse coding phase, ensuring the local consistency among the spatially-adjacent image patches. In the subsequent fusion process, we define a focus measure to determine the focused and de-focused regions in the multi-focus images by collaboratively employing information from each local image patch as well as those from its 8-connected spatial neighbors. As a result of that, the proposed method is likely to introduce fewer spatial artifacts to the fused image. Moreover, an over-complete dictionary with small atoms that maintains good representation capability, rather than using the input data themselves, is constructed for the LR_RSR model during sparse coding. By doing that, the computational complexity of the proposed fusion method is greatly reduced, while the fusion performance is not degraded and can be even slightly improved. Experimental results demonstrate the validity of the proposed method, and more importantly, it turns out that our LR-RSR algorithm is more computationally efficient than most of the traditional SR-based fusion methods.",
keywords = "Multi-focus image fusion, Robust sparse representation, Dictionary construction, Spatial contextual information, Spatial consistency",
author = "Qiang Zhang and Tao Shi and Fan Wang and Blum, {Rick S.} and Jungong Han",
note = "This is the author{\textquoteright}s version of a work that was accepted for publication in Pattern Recognition. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Pattern Recognition, 83, 2018 DOI: 10.1016/j.patcog.2018.06.003",
year = "2018",
month = nov,
doi = "10.1016/j.patcog.2018.06.003",
language = "English",
volume = "83",
pages = "299--313",
journal = "Pattern Recognition",
issn = "0031-3203",
publisher = "Elsevier Ltd",
}

RIS

TY - JOUR

T1 - Robust sparse representation based multi-focus image fusion with dictionary construction and local spatial consistency

AU - Zhang, Qiang

AU - Shi, Tao

AU - Wang, Fan

AU - Blum, Rick S.

AU - Han, Jungong

N1 - This is the author’s version of a work that was accepted for publication in Pattern Recognition. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Pattern Recognition, 83, 2018 DOI: 10.1016/j.patcog.2018.06.003

PY - 2018/11

Y1 - 2018/11

N2 - Recently, sparse representation-based (SR) methods have been presented for the fusion of multi-focus images. However, most of them independently consider the local information from each image patch during sparse coding and fusion, giving rise to the spatial artifacts on the fused image. In order to overcome this issue, we present a novel multi-focus image fusion method by jointly considering information from each local image patch as well as its spatial contextual information during the sparse coding and fusion in this paper. Specifically, we employ a robust sparse representation (LR_RSR, for short) model with a Laplacian regularization term on the sparse error matrix in the sparse coding phase, ensuring the local consistency among the spatially-adjacent image patches. In the subsequent fusion process, we define a focus measure to determine the focused and de-focused regions in the multi-focus images by collaboratively employing information from each local image patch as well as those from its 8-connected spatial neighbors. As a result of that, the proposed method is likely to introduce fewer spatial artifacts to the fused image. Moreover, an over-complete dictionary with small atoms that maintains good representation capability, rather than using the input data themselves, is constructed for the LR_RSR model during sparse coding. By doing that, the computational complexity of the proposed fusion method is greatly reduced, while the fusion performance is not degraded and can be even slightly improved. Experimental results demonstrate the validity of the proposed method, and more importantly, it turns out that our LR-RSR algorithm is more computationally efficient than most of the traditional SR-based fusion methods.

AB - Recently, sparse representation-based (SR) methods have been presented for the fusion of multi-focus images. However, most of them independently consider the local information from each image patch during sparse coding and fusion, giving rise to the spatial artifacts on the fused image. In order to overcome this issue, we present a novel multi-focus image fusion method by jointly considering information from each local image patch as well as its spatial contextual information during the sparse coding and fusion in this paper. Specifically, we employ a robust sparse representation (LR_RSR, for short) model with a Laplacian regularization term on the sparse error matrix in the sparse coding phase, ensuring the local consistency among the spatially-adjacent image patches. In the subsequent fusion process, we define a focus measure to determine the focused and de-focused regions in the multi-focus images by collaboratively employing information from each local image patch as well as those from its 8-connected spatial neighbors. As a result of that, the proposed method is likely to introduce fewer spatial artifacts to the fused image. Moreover, an over-complete dictionary with small atoms that maintains good representation capability, rather than using the input data themselves, is constructed for the LR_RSR model during sparse coding. By doing that, the computational complexity of the proposed fusion method is greatly reduced, while the fusion performance is not degraded and can be even slightly improved. Experimental results demonstrate the validity of the proposed method, and more importantly, it turns out that our LR-RSR algorithm is more computationally efficient than most of the traditional SR-based fusion methods.

KW - Multi-focus image fusion

KW - Robust sparse representation

KW - Dictionary construction

KW - Spatial contextual information

KW - Spatial consistency

U2 - 10.1016/j.patcog.2018.06.003

DO - 10.1016/j.patcog.2018.06.003

M3 - Journal article

VL - 83

SP - 299

EP - 313

JO - Pattern Recognition

JF - Pattern Recognition

SN - 0031-3203

ER -