ATPL: Mutually enhanced adversarial training and pseudo labeling for unsupervised domain adaptation
Abstract:
Unsupervised domain adaptation aims to transfer knowledge from a labeled source domain to a related but unlabeled target domain. Most existing approaches either adversarially reduce the domain shift or use pseudo-labels to provide category information during adaptation. However, adversarial training alone may sacrifice the discriminability of the target data, since no category information is available. Moreover, it is difficult for a pseudo labeling method to produce high-confidence samples, since the classifier is typically trained only on the source domain while a domain discrepancy remains; noisy pseudo-labels may therefore harm the learning of target representations. A potential solution is to make the two strategies compensate for each other, so as to simultaneously guarantee feature transferability and discriminability, the two key criteria of feature representations in domain adaptation. In this paper, we propose a novel method named ATPL, which mutually promotes Adversarial Training and Pseudo Labeling for unsupervised domain adaptation. ATPL produces high-confidence pseudo-labels through adversarial training; in turn, it uses the pseudo-labeled information to improve the adversarial training process, which guarantees feature transferability by generating adversarial data that bridge the domain gap. The pseudo-labels also boost feature discriminability. Extensive experiments on real datasets demonstrate that ATPL outperforms state-of-the-art unsupervised domain adaptation methods.
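The confidence-based pseudo-label selection that the abstract refers to can be sketched minimally as follows. This is an illustrative assumption about the general technique, not the paper's actual algorithm: the threshold value, function names, and example logits are all hypothetical.

```python
import math

def softmax(logits):
    """Convert raw classifier logits into class probabilities."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def select_pseudo_labels(target_logits, threshold=0.9):
    """Keep only target samples whose top class probability exceeds
    `threshold`; return (sample_index, pseudo_label) pairs.
    The 0.9 default is an arbitrary illustrative choice."""
    selected = []
    for i, logits in enumerate(target_logits):
        probs = softmax(logits)
        conf = max(probs)
        if conf >= threshold:
            selected.append((i, probs.index(conf)))
    return selected

# Example: three unlabeled target samples, 3-way classifier logits.
logits_batch = [
    [5.0, 0.1, 0.1],   # confident -> pseudo-labeled as class 0
    [1.0, 1.1, 0.9],   # ambiguous -> discarded
    [0.2, 0.1, 6.0],   # confident -> pseudo-labeled as class 2
]
print(select_pseudo_labels(logits_batch))  # [(0, 0), (2, 2)]
```

Only the retained high-confidence pairs would then feed back into training; the low-confidence sample is dropped, which is how such schemes try to avoid the negative influence of noisy pseudo-labels mentioned above.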
Keywords: Adversarial training, Pseudo labeling, Domain adaptation
Article history: Received 7 November 2021, Revised 22 March 2022, Accepted 14 April 2022, Available online 8 May 2022, Version of Record 21 May 2022.
DOI: https://doi.org/10.1016/j.knosys.2022.108831