Residual network with detail perception loss for single image super-resolution

Authors:

Highlights:

Abstract:

Recently, deep convolutional neural networks have demonstrated high-quality reconstruction for single image super-resolution. In this study, we present a network that uses residual blocks built from cascading simple blocks to improve image resolution. Cascading simple blocks with a multi-layer perceptron is conducive to extracting features and approximating a complex mapping with fewer parameters, while skip connections help alleviate the vanishing-gradient problem of deep networks. In addition, our network contains two pathways: one predicts the high-frequency information of the high-resolution image, and the other predicts its low-frequency information. The information from the two pathways is then fused, and pixel-shuffle is used for upsampling. Moreover, to capture texture details, we introduce a novel loss function called detail perception loss, which measures the difference between the wavelet coefficients of the reconstructed image and those of the ground truth. Reducing the detail perception loss makes the texture details of the reconstructed image more similar to those of the ground truth. Extensive quantitative and qualitative experiments on four benchmark datasets show that our method achieves superior performance over typical single image super-resolution methods.
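The abstract's detail perception loss compares wavelet coefficients of the reconstructed image against those of the ground truth. The paper does not specify the wavelet family, decomposition depth, or distance metric here, so the following is only a minimal sketch assuming a single-level Haar transform and an L1 distance, implemented in plain NumPy:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar wavelet transform of a grayscale image
    (height and width assumed even). Returns the four subbands
    (LL, LH, HL, HH), each half the spatial size of the input."""
    a = img[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0  # approximation (low-frequency)
    lh = (a + b - c - d) / 2.0  # vertical detail
    hl = (a - b + c - d) / 2.0  # horizontal detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh

def detail_perception_loss(sr, hr):
    """Sketch of a detail perception loss: mean absolute difference of
    Haar wavelet coefficients between a reconstructed image `sr` and
    the ground truth `hr` (equal weighting of subbands is an assumption)."""
    return float(np.mean([np.mean(np.abs(s - h))
                          for s, h in zip(haar_dwt2(sr), haar_dwt2(hr))]))
```

Because the three detail subbands carry texture and edge information, penalizing their difference pushes the network toward reproducing ground-truth textures, which is the stated motivation for the loss.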

Keywords:

Article history: Received 26 December 2019, Revised 8 April 2020, Accepted 31 May 2020, Available online 3 June 2020, Version of Record 17 June 2020.

DOI: https://doi.org/10.1016/j.cviu.2020.103007