Dual contrastive universal adaptation network for multi-source visual recognition

Authors:

Highlights:

• This is an early work that explores the Universal Multi-Source Domain Adaptation (UniMDA) setting.

• The proposed method tackles both domain shift and category shift across domains.

• Experiments show that the proposed method outperforms state-of-the-art methods by about 4.26%.


Keywords: Image classification, Contrastive learning, Universal multi-source domain adaptation, Deep learning

Article history: Received 15 April 2022; Revised 27 July 2022; Accepted 4 August 2022; Available online 9 August 2022; Version of Record 22 August 2022.

DOI: https://doi.org/10.1016/j.knosys.2022.109632