
Electronic data

  • elsarticle-template-num

Rights statement: This is the author’s version of a work that was accepted for publication in Information Sciences. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Information Sciences, 473, 2019. DOI: 10.1016/j.ins.2018.09.018

    Accepted author manuscript, 9.56 MB, PDF document

    Available under license: CC BY-NC-ND: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License

Links

Text available via DOI: 10.1016/j.ins.2018.09.018

Single Image Super-Resolution Using Multi-Scale Deep Encoder-Decoder with Phase Congruency Edge Map Guidance

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published
  • Heng Liu
  • Zilin Fu
  • Jungong Han
  • Ling Shao
  • Shudong Hou
  • Yuezhong Chu
Journal publication date: 01/2019
Journal: Information Sciences
Volume: 473
Number of pages: 15
Pages (from-to): 44-58
Publication status: Published
Early online date: 18/09/2018
Original language: English

Abstract

This paper presents an end-to-end multi-scale deep encoder (convolution) and decoder (deconvolution) network for single image super-resolution (SISR) guided by a phase congruency (PC) edge map. Our system starts with a single-scale symmetrical encoder-decoder structure for SISR, which is then extended to a multi-scale model by integrating wavelet multi-resolution analysis into the network. The new multi-scale deep learning system allows the low-resolution (LR) input and its PC edge map to be combined so as to precisely predict multi-scale super-resolved edge details under the guidance of the high-resolution (HR) PC edge map. In this way, the proposed deep model takes both the reconstruction of image pixel intensities and the recovery of multi-scale edge details into consideration within the same framework. We evaluate the proposed model on benchmark datasets covering different data scenarios: Set14 and BSD100 (natural images), and Middlebury and New Tsukuba (depth images). Evaluations based on both PSNR and visual perception show that the proposed model is superior to state-of-the-art methods.
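The abstract describes the architecture only at a high level. As a concrete illustration, the following minimal PyTorch sketch shows the single-scale symmetric encoder-decoder idea: the LR image is concatenated with its PC edge map and passed through paired convolution/deconvolution layers with symmetric skip connections. The class name, layer count, channel widths, residual output, and the assumption that the LR input has been bicubically upsampled beforehand are all illustrative choices, not the paper's released implementation.

import torch
import torch.nn as nn

class SymmetricEncoderDecoder(nn.Module):
    """Minimal sketch (assumed structure, not the authors' code) of a
    single-scale symmetric conv/deconv network with PC edge map guidance."""

    def __init__(self, channels: int = 64, depth: int = 3):
        super().__init__()
        self.encoders = nn.ModuleList()
        self.decoders = nn.ModuleList()
        in_ch = 2  # input: LR luminance (1 channel) + its PC edge map (1 channel)
        for _ in range(depth):
            self.encoders.append(nn.Sequential(
                nn.Conv2d(in_ch, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)))
            in_ch = channels
        for i in range(depth):
            out_ch = 1 if i == depth - 1 else channels
            self.decoders.append(nn.Sequential(
                nn.ConvTranspose2d(channels, out_ch, kernel_size=3, padding=1),
                nn.ReLU(inplace=True) if i < depth - 1 else nn.Identity()))

    def forward(self, lr: torch.Tensor, pc_edge: torch.Tensor) -> torch.Tensor:
        x = torch.cat([lr, pc_edge], dim=1)
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)
        for i, dec in enumerate(self.decoders):
            # Symmetric skip connection: add the matching encoder feature map.
            x = dec(x + skips[-(i + 1)])
        # Residual prediction: learn HR detail on top of the LR input
        # (an assumption here; the paper may predict pixel values directly).
        return x + lr

if __name__ == "__main__":
    net = SymmetricEncoderDecoder()
    lr = torch.rand(1, 1, 64, 64)   # upsampled LR luminance patch
    pc = torch.rand(1, 1, 64, 64)   # its PC edge map (stand-in values)
    print(net(lr, pc).shape)        # torch.Size([1, 1, 64, 64])

The paper extends this single-scale structure to a multi-scale model via wavelet multi-resolution analysis and supervises the predicted edges with the HR PC edge map; both of those components are omitted from this sketch.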
