StyleFuse: An unsupervised network based on style loss function for infrared and visible image fusion
Authors:
Highlights:
• A novel unsupervised end-to-end image fusion model is proposed.
• A style fusion loss function is designed, for the first time, to enhance image fusion performance.
• The image fusion model utilizes two types of attention-based network connections.
• Experimental results show that the proposed method outperforms state-of-the-art fusion methods.
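The metadata above does not specify how the style fusion loss is computed. In style-transfer work generally, a style loss compares Gram matrices (channel-wise feature correlations) of deep feature maps. The sketch below is an illustrative NumPy implementation of that generic idea, not the paper's actual loss; the function names, feature shapes, and normalization are assumptions.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature map.

    Captures channel-wise correlations, which style losses treat as a
    texture/style signature of the feature map.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    # Normalize by the number of elements so the loss scale is
    # independent of feature-map size (a common convention; the
    # paper's normalization may differ).
    return (f @ f.T) / (c * h * w)

def style_loss(feat_fused, feat_source):
    """Mean squared difference between Gram matrices.

    A generic style loss: penalizes the fused image's features for
    deviating from the source image's feature correlations.
    """
    g_fused = gram_matrix(feat_fused)
    g_source = gram_matrix(feat_source)
    return float(np.mean((g_fused - g_source) ** 2))
```

For infrared–visible fusion, such a loss would typically be evaluated against both source images (infrared and visible) and summed with weights, so the fused result preserves stylistic statistics from each modality.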
Keywords: Image fusion, Infrared image, Visible image, Style loss, Style transfer
Article history: Received 7 January 2022, Revised 6 April 2022, Accepted 21 April 2022, Available online 7 May 2022, Version of Record 12 May 2022.
DOI: https://doi.org/10.1016/j.image.2022.116722