Representation learning with deep sparse auto-encoder for multi-task learning

Authors:

Highlights:

• We propose a novel representation learning method, DSML, based on a deep sparse auto-encoder, to achieve better performance in multi-task learning.

• We propose a new stacked sparse auto-encoder for feature reconstruction, which learns higher-level, better representations via deep learning and effectively mitigates overfitting.

• Training the parameters of DSML incurs a lower computational cost than other common deep learning methods.

• Experimental comparisons with seven representation learning methods show that DSML outperforms both traditional and state-of-the-art representation learning approaches for multi-task learning.
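The highlights describe a stacked sparse auto-encoder as DSML's building block. As a rough illustration only (the paper's exact architecture, loss weights, and training procedure are not reproduced here), a single sparse auto-encoder layer typically combines a reconstruction loss with a sparsity penalty that drives the mean hidden activation toward a small target; all names and hyperparameters below are illustrative assumptions:

```python
import numpy as np

# Illustrative single sparse auto-encoder layer (not DSML's actual code).
# Sparsity is encouraged with a KL-divergence penalty that pushes the mean
# hidden activation of each unit toward a small target value rho.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_ae_loss(X, W1, b1, W2, b2, rho=0.05, beta=3.0):
    """Reconstruction error plus sparsity penalty for one forward pass."""
    H = sigmoid(X @ W1 + b1)          # hidden representation
    X_hat = sigmoid(H @ W2 + b2)      # reconstruction of the input
    recon = 0.5 * np.mean(np.sum((X_hat - X) ** 2, axis=1))
    rho_hat = H.mean(axis=0)          # mean activation per hidden unit
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return recon + beta * kl, H

n, d, h = 32, 10, 4                   # samples, input dim, hidden dim
X = rng.random((n, d))
W1 = rng.normal(0, 0.1, (d, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.1, (h, d)); b2 = np.zeros(d)

loss, H = sparse_ae_loss(X, W1, b1, W2, b2)
print(loss, H.shape)
```

Stacking such layers (feeding each layer's hidden representation `H` to the next) yields the deeper representations the highlights refer to; the sparsity penalty is one standard way such models limit overfitting.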


Keywords: Deep sparse auto-encoder, Multi-task learning, RICA, Labeled and unlabeled data

Article history: Received 6 February 2018, Revised 27 January 2022, Accepted 24 April 2022, Available online 29 April 2022, Version of Record 10 May 2022.

DOI: https://doi.org/10.1016/j.patcog.2022.108742