Inter-training: Exploiting unlabeled data in multi-classifier systems

Authors:

Highlights:

Abstract

We present a new and more general co-training style framework, named Inter-training, to exploit unlabeled data in multi-classifier systems, and develop two concrete algorithms that employ new strategies to iteratively retrain the base classifiers. A decline in diversity over the iterations is a main obstacle to further improvement of co-training style algorithms. In this paper, we propose a method that recreates diversity among the base classifiers by manipulating the pseudo-labeled data used by co-training style algorithms. On the theoretical side, we define hybrid classification and distribution (HCAD) noise and provide a Probably Approximately Correct (PAC) analysis of co-training style algorithms in the presence of HCAD noise. Experimental results on six datasets show that our method performs substantially better in practice, with the advantage being especially pronounced on hard-to-classify datasets.
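To make the iterative retraining idea concrete, below is a minimal sketch of a generic two-classifier co-training style loop, assuming scikit-learn base learners. The function name co_train, the choice of GaussianNB and DecisionTreeClassifier, and the confidence-based selection heuristic are illustrative assumptions; this is not the paper's Inter-training algorithm or its diversity-recreation strategy.

```python
# Hypothetical sketch of a co-training style retraining loop (not the
# paper's Inter-training algorithms; names and heuristics are assumed).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def co_train(X_lab, y_lab, X_unlab, n_iter=10, n_add=5):
    """Generic two-classifier co-training loop (illustrative only)."""
    # Two structurally different base learners, since diversity between
    # the base classifiers is what co-training style methods rely on.
    clfs = [GaussianNB(), DecisionTreeClassifier(max_depth=3)]
    X_sets = [X_lab.copy(), X_lab.copy()]
    y_sets = [y_lab.copy(), y_lab.copy()]
    pool = X_unlab.copy()

    for _ in range(n_iter):
        if len(pool) < n_add:
            break
        # Retrain each base classifier on its current labeled set.
        for clf, X_i, y_i in zip(clfs, X_sets, y_sets):
            clf.fit(X_i, y_i)
        used = set()
        # Each classifier pseudo-labels its most confident unlabeled
        # examples and donates them to the *other* classifier's set.
        for i, clf in enumerate(clfs):
            conf = clf.predict_proba(pool).max(axis=1)
            top = np.argsort(conf)[-n_add:]
            j = 1 - i  # index of the peer classifier
            X_sets[j] = np.vstack([X_sets[j], pool[top]])
            y_sets[j] = np.concatenate([y_sets[j], clf.predict(pool[top])])
            used.update(top.tolist())
        # Remove the transferred examples from the unlabeled pool.
        pool = np.delete(pool, list(used), axis=0)
    return clfs
```

The two base learners are deliberately drawn from different model families so that the exchanged pseudo-labels carry complementary information; the paper's contribution concerns how the pseudo-labeled data are manipulated so that this diversity does not collapse as the iterations proceed.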

Keywords: Co-training, Multi-classifier systems, Noise, Diversity, PAC analysis

Article history: Received 4 July 2012, Revised 20 January 2013, Accepted 25 January 2013, Available online 4 February 2013.

Paper URL: https://doi.org/10.1016/j.knosys.2013.01.028