Self-attention driven adversarial similarity learning network
Authors:
Highlights:
• Conventional similarity learning algorithms take the entire given object into account. To address this, we exploit the self-attention mechanism to generate self-attention-weighted feature maps of the given objects and feed them into the subsequent similarity learning step. Because the selected regions store semantic information, the resulting similarity scores are discriminative, rather than being computed over the entire object.
• Conventional similarity learning algorithms aim only to distinguish objects and lack semantic interpretability in the similarity scores they produce. To address this, we propose an interpretable similarity learning method.
• In addition, we add a generator-discriminator model with an adversarial loss that forces the topic vectors to capture and preserve the hidden semantic information in the self-attention-weighted feature maps of the given objects. This is accomplished by propagating the difference between the objects generated from the topic vectors and the real objects back to the similarity learning step.
• To obtain globally optimized results and prevent the captured semantic information from being erased during training, we combine the self-attention mechanism, the similarity learning section, and the generator-discriminator section into an end-to-end self-attention driven adversarial similarity learning network, training all components simultaneously with a joint loss function.
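The self-attention weighting described in the first highlight can be sketched in miniature: each position of a (flattened) feature map attends to every other position via dot-product scores, and the softmax-normalized weights produce an attention-weighted feature map. This is a minimal illustrative sketch in pure Python, not the paper's actual network; the function name and list-based representation are assumptions for clarity.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention_weight(features):
    """Return attention-weighted feature vectors.

    `features` is a list of N feature vectors, a hypothetical stand-in for a
    flattened H*W feature map from a CNN backbone.
    """
    n = len(features)
    weighted = []
    for i in range(n):
        # dot-product attention score of position i against every position j
        scores = [sum(a * b for a, b in zip(features[i], features[j]))
                  for j in range(n)]
        attn = softmax(scores)
        # output at position i: attention-weighted sum over all positions
        dim = len(features[i])
        out = [sum(attn[j] * features[j][d] for j in range(n))
               for d in range(dim)]
        weighted.append(out)
    return weighted
```

In the paper's pipeline these weighted maps, rather than the raw feature maps, are what the similarity learning step consumes, which is what makes the similarity scores focus on semantically informative regions.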
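The coupling of the similarity objective with the generator-discriminator model described in the highlights can be sketched as a toy joint loss. This is a minimal stand-in under stated assumptions: the contrastive form, the margin, and the weight `lam` are illustrative choices, not the paper's actual formulation.

```python
import math

def contrastive_loss(dist, same_label, margin=1.0):
    # similar pairs are pulled together; dissimilar pairs pushed past the margin
    if same_label:
        return dist ** 2
    return max(0.0, margin - dist) ** 2

def adversarial_loss(d_real, d_fake, eps=1e-12):
    # standard GAN discriminator loss on real objects vs. objects
    # generated from the topic vectors
    return -(math.log(d_real + eps) + math.log(1.0 - d_fake + eps))

def joint_loss(dist, same_label, d_real, d_fake, lam=0.5):
    # joint objective: similarity term plus weighted adversarial term,
    # so both parts are optimized end-to-end
    return contrastive_loss(dist, same_label) + lam * adversarial_loss(d_real, d_fake)
```

Backpropagating the adversarial term through the topic vectors is what, per the highlights, forces them to preserve the semantic information captured by the attention-weighted feature maps.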
Keywords: Self-attention mechanism, Adversarial loss, Similarity learning network, Explainable deep learning
Article history: Received 20 June 2019, Revised 15 January 2020, Accepted 12 March 2020, Available online 7 May 2020, Version of Record 5 June 2020.
Article link: https://doi.org/10.1016/j.patcog.2020.107331