Discrete optimal Bayesian classification with error-conditioned sequential sampling

Authors:

Highlights:

• A sampling algorithm for training the optimal Bayesian classifier is introduced.

• The algorithm works by minimizing the expected error over the uncertainty class defined by prior knowledge (see the sketch after this list).

• Using a Zipf model, we show that our sampling algorithm yields a lower true error on average than random sampling.

• Our algorithm remains robust even when the prior knowledge drifts away from the true distributions.

• An example on data from the p53 network shows that our method also performs well on real pathway data.
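The highlights do not spell out the procedure, so the following is a minimal illustrative sketch rather than the paper's exact algorithm. It assumes a discrete (bin-based) two-class model with Dirichlet priors on the class-conditional bin probabilities: the optimal Bayesian classifier uses the posterior-mean (effective) densities, its expected error over the uncertainty class reduces to a sum of bin-wise minima, and the sampler greedily requests the next observation from whichever class minimizes the predicted expected error. The helper names (`effective_density`, `expected_error`, `choose_next_class`) and the hyperparameters are hypothetical.

```python
import numpy as np

def effective_density(alpha, counts):
    """Posterior-mean bin probabilities under a Dirichlet prior (hypothetical helper)."""
    post = alpha + counts
    return post / post.sum()

def expected_error(alpha0, counts0, alpha1, counts1, c=0.5):
    """Expected error of the discrete optimal Bayesian classifier:
    sum over bins of min(c * eff0, (1 - c) * eff1)."""
    eff0 = effective_density(alpha0, counts0)
    eff1 = effective_density(alpha1, counts1)
    return np.sum(np.minimum(c * eff0, (1.0 - c) * eff1))

def choose_next_class(alpha0, counts0, alpha1, counts1, c=0.5):
    """Greedy error-conditioned choice: pick the class whose next sample,
    averaged over its predictive bin distribution, gives the lowest expected error."""
    best_cls, best_val = None, np.inf
    for cls, (a, n, a_o, n_o) in enumerate(
        [(alpha0, counts0, alpha1, counts1), (alpha1, counts1, alpha0, counts0)]
    ):
        pred = effective_density(a, n)  # predictive distribution of the next bin
        val = 0.0
        for j, pj in enumerate(pred):
            n_new = n.copy()
            n_new[j] += 1               # hypothetical outcome: next sample falls in bin j
            if cls == 0:
                val += pj * expected_error(a, n_new, a_o, n_o, c)
            else:
                val += pj * expected_error(a_o, n_o, a, n_new, c)
        if val < best_val:
            best_cls, best_val = cls, val
    return best_cls, best_val

# Usage: 8 bins, flat Dirichlet priors, a few observed counts per class.
alpha0, alpha1 = np.ones(8), np.ones(8)
counts0 = np.array([3, 2, 1, 0, 0, 0, 0, 0], dtype=float)
counts1 = np.array([0, 0, 0, 1, 2, 3, 0, 0], dtype=float)
print(expected_error(alpha0, counts0, alpha1, counts1))
print(choose_next_class(alpha0, counts0, alpha1, counts1))
```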

Abstract:


Keywords: Optimal Bayesian classifier, Controlled sampling, Prior knowledge

Article history: Received 1 December 2014, Revised 3 March 2015, Accepted 30 March 2015, Available online 16 April 2015, Version of Record 16 July 2015.

DOI: https://doi.org/10.1016/j.patcog.2015.03.023