DDAT: Dual domain adaptive translation for low-resolution face verification in the wild
Authors:
Abstract
Low-resolution (LR) face verification has received much attention because of its wide applicability in real scenarios, especially long-distance surveillance. However, the poor quality and scarcity of training data keep the accuracy far from satisfactory. In this paper, we propose an end-to-end LR face translation and verification framework that improves face image generation quality and face verification accuracy simultaneously. We design a dual domain adaptive structure to generate high-quality images. On the one hand, the structure reduces the domain gap between training data and test data; on the other hand, it preserves identity consistency and low-level attributes. Meanwhile, to make the whole model more robust, we treat the generated images of the target domain as an extension of the training data. We conduct extensive comparative experiments on multiple benchmark datasets. Experimental results show that our method improves both high-quality face generation and LR face verification. In particular, our model DDAT reduces the FID from 254.7 to 18.63 on the source domain and from 206.19 to 39.55 on the target domain, relative to the up-sampling baseline. Our method also outperforms competing approaches by more than 10 percentage points in face verification accuracy on multiple surveillance benchmarks.
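The abstract describes a joint objective: an LR-to-HR translation network trained together with a face verification embedding, with identity consistency and low-level attribute preservation on the source domain, and translated target-domain images reused as additional training data. The sketch below is not the authors' released code; it is a minimal PyTorch illustration of such a combined loss, in which the `Translator` and `Embedder` modules, the consistency terms, and the loss weights are all assumptions, and the adversarial and domain-adaptation losses mentioned in the paper are omitted for brevity.

```python
# Minimal sketch (assumptions, not the DDAT implementation) of jointly training
# an LR->HR translator and a verification embedding, reusing translated
# target-domain images as extra training data.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Translator(nn.Module):
    """Toy LR->HR generator standing in for the dual-domain translation network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 3, 3, padding=1),
        )
    def forward(self, lr):
        return self.net(lr)

class Embedder(nn.Module):
    """Toy face-embedding network used for the identity-consistency terms."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, stride=2, padding=1)
        self.fc = nn.Linear(16, dim)
    def forward(self, x):
        h = F.adaptive_avg_pool2d(F.relu(self.conv(x)), 1).flatten(1)
        return F.normalize(self.fc(h), dim=1)

translator, embedder = Translator(), Embedder()
opt = torch.optim.Adam(
    list(translator.parameters()) + list(embedder.parameters()), lr=1e-4
)

# Dummy batches: paired LR/HR source-domain faces, unpaired LR target-domain faces.
lr_src, hr_src = torch.rand(4, 3, 28, 28), torch.rand(4, 3, 112, 112)
lr_tgt = torch.rand(4, 3, 28, 28)

sr_src = translator(lr_src)
sr_tgt = translator(lr_tgt)  # generated target-domain images

# Low-level attribute preservation on the source domain (pixel reconstruction).
pixel_loss = F.l1_loss(sr_src, hr_src)
# Identity consistency between translated and ground-truth HR source faces.
id_loss = 1 - F.cosine_similarity(embedder(sr_src), embedder(hr_src)).mean()
# Treat translated target-domain images as extra training data: embeddings of the
# translated face and its up-sampled LR counterpart should agree (assumed form).
lr_tgt_up = F.interpolate(lr_tgt, size=hr_src.shape[-2:], mode="bilinear",
                          align_corners=False)
aug_loss = 1 - F.cosine_similarity(embedder(sr_tgt), embedder(lr_tgt_up)).mean()

loss = pixel_loss + 0.5 * id_loss + 0.1 * aug_loss  # weights are illustrative
opt.zero_grad()
loss.backward()
opt.step()
```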
Keywords: Low-resolution face verification, Domain adaptation, Image translation, GAN
Article history: Received 23 March 2020, Revised 19 May 2021, Accepted 5 June 2021, Available online 12 June 2021, Version of Record 26 June 2021.
DOI: https://doi.org/10.1016/j.patcog.2021.108107