
Electronic data

  • Single_image_dehazing_using_deep_Neural_Networks__mine_

Rights statement: This is the author’s version of a work that was accepted for publication in Pattern Recognition Letters. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Pattern Recognition Letters, 128, 2019. DOI: 10.1016/j.patrec.2019.08.013

    Accepted author manuscript, 18.9 MB, PDF document

    Available under license: CC BY-NC-ND: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License

Links

Text available via DOI: 10.1016/j.patrec.2019.08.013


Single image dehazing using deep neural networks

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published
Journal publication date: 1/12/2019
Journal: Pattern Recognition Letters
Volume: 128
Number of pages: 8
Pages (from-to): 70-77
Publication status: Published
Early online date: 16/08/2019
Original language: English

Abstract

The rapid growth of computer vision applications that are affected by environmental conditions is exposing the limitations of existing techniques. This is driving the development of new deep-learning-based vision techniques that are robust to environmental noise and interference. We propose a novel deep CNN model, trained on unmatched images, for image dehazing. This solution is enabled by the concept of the Siamese network architecture. Using the objective performance measures PSNR and SSIM, we demonstrate a quantitative and qualitative improvement in the network's dehazing performance. This superior performance is achieved with significantly smaller training datasets than existing methods require.
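The abstract does not detail how the authors adapt Siamese training to unmatched images, so the following is only a generic sketch of the Siamese idea the work builds on: two inputs passed through a single shared-weight CNN whose outputs can then be compared in a loss. The layer sizes and names are illustrative assumptions, not the authors' architecture.

    import torch
    import torch.nn as nn

    class SharedEncoder(nn.Module):
        """Toy CNN; in a Siamese setup the same weights process both inputs."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        def forward(self, x):
            return self.net(x)

    encoder = SharedEncoder()
    hazy = torch.rand(1, 3, 64, 64)    # hypothetical hazy image batch
    clear = torch.rand(1, 3, 64, 64)   # hypothetical clear image batch
    # One encoder, two branches: both embeddings come from the same weights,
    # so the comparison below is what drives the shared network's training.
    distance = torch.norm(encoder(hazy) - encoder(clear), dim=1)

Results are reported with the objective measures PSNR and SSIM. As a minimal sketch (not the authors' evaluation code), these can be computed with scikit-image as follows; the file names and the 8-bit data range are assumptions.

    import skimage.io as io
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    dehazed = io.imread("dehazed.png")         # network output (uint8 RGB)
    reference = io.imread("ground_truth.png")  # haze-free reference image

    # PSNR: log-ratio of peak signal power to mean squared error, in dB.
    psnr = peak_signal_noise_ratio(reference, dehazed, data_range=255)
    # SSIM: structural similarity; channel_axis=-1 marks the RGB axis.
    ssim = structural_similarity(reference, dehazed, data_range=255,
                                 channel_axis=-1)
    print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}")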
