Variant-Depth Neural Networks for Deblurring Traffic Images in Intelligent Transportation Systems

Research output: Contribution to Journal/Magazine › Journal article › peer-review

  • Qian Wang
  • Cai Guo
  • Hong-Ning Dai
  • Min Xia
Article number: 6
Journal publication date: 1/06/2023
Journal: IEEE Transactions on Intelligent Transportation Systems
Issue number: 6
Volume: 24
Number of pages: 11
Pages (from-to): 5792-5802
Publication status: Published
Early online date: 4/04/23
Original language: English

Abstract

Intelligent transportation systems (ITS) rely on surveillance cameras to capture traffic images and videos. However, these images and videos often suffer from blur for various reasons. Although recent methods have made progress in image deblurring, resource limitations still pose challenges to deploying image-deblurring models in practical transportation systems, namely model size and running time. This work proposes an artful variant-depth network (VDN) to address these challenges. We design variant-depth sub-networks in a coarse-to-fine manner to improve the deblurring effect. We also adopt a new connection, namely the stack connection, to link all sub-networks, reducing running time and model size while maintaining high deblurring quality. We evaluate the proposed VDN against state-of-the-art (SOTA) methods on several typical datasets. Results on Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) show that the VDN outperforms SOTA image-deblurring methods. Furthermore, the VDN also has the shortest running time and the smallest model size.
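The abstract does not spell out the architecture, but a minimal sketch may help illustrate the idea of coarse-to-fine sub-networks of different depths joined by stack connections. The PyTorch sketch below is illustrative only: the class names (SubNet, VariantDepthNet), the number of sub-networks, their depths and widths, and the reading of the stack connection as concatenating the blurry input with the previous sub-network's estimate are all assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SubNet(nn.Module):
    """A plain convolutional sub-network whose depth is configurable (assumed structure)."""

    def __init__(self, in_channels: int, depth: int, width: int = 32):
        super().__init__()
        layers = [nn.Conv2d(in_channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, 3, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)


class VariantDepthNet(nn.Module):
    """Coarse-to-fine stack of sub-networks with decreasing depths (hypothetical sketch).

    Each later sub-network receives the blurry input concatenated ("stacked")
    with the previous sub-network's output and predicts a residual refinement.
    """

    def __init__(self, depths=(8, 6, 4)):
        super().__init__()
        # First sub-network sees only the blurry image (3 channels);
        # later ones also see the previous estimate (3 + 3 channels).
        self.subnets = nn.ModuleList(
            SubNet(3 if i == 0 else 6, d) for i, d in enumerate(depths)
        )

    def forward(self, blurry):
        estimate = self.subnets[0](blurry)
        for subnet in self.subnets[1:]:
            stacked = torch.cat([blurry, estimate], dim=1)  # assumed stack connection
            estimate = estimate + subnet(stacked)           # residual refinement
        return estimate


if __name__ == "__main__":
    model = VariantDepthNet()
    x = torch.randn(1, 3, 256, 256)   # dummy blurry frame
    print(model(x).shape)             # torch.Size([1, 3, 256, 256])
```

In this reading, the shallower sub-networks act as cheap refiners on top of the coarsest estimate, which is one way a stacked design could keep both model size and running time low; the paper itself should be consulted for the actual depths and connection details.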