Gated fusion network for SAO filter and inter frame prediction in Versatile Video Coding
Authors:
Highlights:
• We present a gated fusion-guided framework that effectively combines heterogeneous intra-frame (local) and inter-frame (temporal) features (a sketch of such a fusion block follows this list).
• Our decoupled model uses a modified loss function to constrain pixel errors and incorporates intermediate convolutional feature maps through skip connections.
• The loss function can be viewed as a generalization of per-batch MSE that adds image gradients as priors for the final image reconstruction (see the loss sketch after this list).
• A data-driven deconvolution framework is integrated into the decoder module to suppress quantization artifacts.
• The end-to-end framework learns feature-map aggregation through separate sub-tasks, optimizes the parameters, and reduces noise more effectively.
• Qualitative and quantitative evaluation shows that our model removes artifacts effectively, especially in crowded target regions, and performs favorably against existing deep-learning-based in-loop filters.
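The following is a minimal sketch of the kind of gated fusion block referenced in the first highlight, assuming the model blends an intra-frame (spatial) feature map and an inter-frame (temporal) feature map through a learned sigmoid gate. The layer layout, channel count, and names (`GatedFusion`, `spatial_feat`, `temporal_feat`) are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuses spatial and temporal feature maps with a learned per-pixel gate."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # The gate is predicted from the concatenation of both branches.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, spatial_feat: torch.Tensor, temporal_feat: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([spatial_feat, temporal_feat], dim=1))
        # Convex combination: the gate decides, per location and channel,
        # how much each branch contributes to the fused representation.
        return g * spatial_feat + (1.0 - g) * temporal_feat

# Usage with dummy (N, C, H, W) feature maps.
fusion = GatedFusion(channels=64)
fused = fusion(torch.randn(1, 64, 64, 64), torch.randn(1, 64, 64, 64))
```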
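Likewise, a minimal sketch of the modified loss from the third highlight, assuming the "generalization of MSE with image-gradient priors" amounts to a per-batch MSE term plus a penalty on finite-difference image gradients of the reconstructed frame. The weighting factor `grad_weight` and the function name are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def mse_with_gradient_prior(pred: torch.Tensor, target: torch.Tensor,
                            grad_weight: float = 0.1) -> torch.Tensor:
    # Per-batch pixel-wise MSE term.
    mse = F.mse_loss(pred, target)
    # Finite-difference gradients along height and width for both frames.
    dy_p = pred[..., 1:, :] - pred[..., :-1, :]
    dx_p = pred[..., :, 1:] - pred[..., :, :-1]
    dy_t = target[..., 1:, :] - target[..., :-1, :]
    dx_t = target[..., :, 1:] - target[..., :, :-1]
    # Gradient prior: penalize mismatch between reconstructed and original gradients.
    grad = F.l1_loss(dy_p, dy_t) + F.l1_loss(dx_p, dx_t)
    return mse + grad_weight * grad

# Usage with dummy reconstructed and ground-truth luma frames.
loss = mse_with_gradient_prior(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))
print(loss.item())
```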
Keywords: Artifacts, VVC, SAO, De-convolution, In-loop filter, Deep learning
Article history: Received 19 January 2022, Revised 6 July 2022, Accepted 9 August 2022, Available online 19 August 2022, Version of Record 5 September 2022.
Article link: https://doi.org/10.1016/j.image.2022.116839