A deep learning framework for Hybrid Heterogeneous Transfer Learning
Authors:
Abstract
Most previous methods in heterogeneous transfer learning learn a cross-domain feature mapping between different domains based on cross-domain instance correspondences. Such correspondences are assumed to be representative of the source domain and the target domain, respectively. However, in many real-world scenarios this assumption may not hold. As a result, the learned feature mapping may be imprecise, and the source-domain labeled data transformed through it are not useful for building an accurate classifier for the target domain. In this paper, we propose a new heterogeneous transfer learning framework named Hybrid Heterogeneous Transfer Learning (HHTL), which allows the selection of corresponding instances across domains to be biased toward the source or target domain. Our basic idea is that although the corresponding instances are biased in the original feature spaces, there may exist other feature spaces in which, after projection, the corresponding instances become unbiased, i.e., representative of the source domain and the target domain, respectively. With such representations, a more precise feature mapping across heterogeneous feature spaces can be learned for knowledge transfer. We design several deep-learning-based architectures and algorithms that enable learning such aligned representations. Extensive experiments on two multilingual classification datasets verify the effectiveness of the proposed HHTL framework and algorithms compared with several state-of-the-art methods.
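As a rough illustration of the idea described in the abstract (this sketch is not taken from the paper), one could pair a denoising autoencoder per domain with a linear mapping that aligns the latent codes of cross-domain corresponding instances. All class names, dimensions, and loss weights below are assumptions made for illustration only.

```python
# Illustrative sketch (not the authors' code): one denoising autoencoder per
# domain learns a higher-level representation, and a linear mapping aligns the
# two latent spaces using cross-domain corresponding instance pairs.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Tanh())
        self.dec = nn.Linear(hid_dim, in_dim)

    def forward(self, x, noise=0.2):
        # Randomly mask input features, then reconstruct from the hidden code.
        x_noisy = x * (torch.rand_like(x) > noise).float()
        return self.dec(self.enc(x_noisy)), self.enc(x)  # reconstruction, clean code

def train_step(src_ae, tgt_ae, mapping, opt,
               x_src, x_tgt, x_src_pair, x_tgt_pair, lam=1.0):
    """One step: reconstruct each domain and align the latent codes of
    corresponding (paired) instances through a linear mapping."""
    rec_s, _ = src_ae(x_src)
    rec_t, _ = tgt_ae(x_tgt)
    _, h_s = src_ae(x_src_pair)
    _, h_t = tgt_ae(x_tgt_pair)
    loss = (nn.functional.mse_loss(rec_s, x_src)
            + nn.functional.mse_loss(rec_t, x_tgt)
            + lam * nn.functional.mse_loss(mapping(h_s), h_t))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Hypothetical sizes: 5000-d source vocabulary, 4000-d target vocabulary, 500-d codes.
src_ae, tgt_ae = DenoisingAE(5000, 500), DenoisingAE(4000, 500)
mapping = nn.Linear(500, 500, bias=False)
opt = torch.optim.Adam([*src_ae.parameters(), *tgt_ae.parameters(),
                        *mapping.parameters()], lr=1e-3)
```

After training, source-domain labeled data would be encoded, mapped into the target latent space, and used to train a target-domain classifier; the actual architectures and objectives used in the paper may differ.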
Keywords: Heterogeneous transfer learning, Deep learning, Multilingual text classification
Article history: Received 8 December 2018, Revised 25 March 2019, Accepted 4 June 2019, Available online 6 June 2019, Version of Record 3 July 2019.
DOI: https://doi.org/10.1016/j.artint.2019.06.001