Absolute convergence and error thresholds in non-active adaptive sampling

Authors:

Highlights:

Abstract

Non-active adaptive sampling is a way of building machine learning models from a training database by dynamically and automatically deriving a guaranteed sample size. In this context, and regardless of the strategy used for both scheduling and generating the weak predictors, we describe a proposal for calculating absolute convergence and error thresholds. It not only makes it possible to establish when the quality of the model no longer increases, but also supplies a proximity condition that estimates, in absolute terms, how close the model is to achieving that goal, thus supporting decision making for fine-tuning learning parameters in model selection. The technique is proved correct and complete with respect to our working hypotheses, and it also strengthens the robustness of the sampling scheme. The tests meet our expectations and illustrate the proposal in the domain of natural language processing, taking the generation of part-of-speech taggers as a case study.
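To make the stopping idea concrete, here is a minimal sketch of non-active adaptive sampling with an absolute convergence threshold, not the authors' actual formulation: the sampler walks a predefined schedule of sample sizes, measures the learning curve, and stops once the per-sample improvement falls below a threshold. All names and parameters (`evaluate`, `schedule`, `epsilon`) are illustrative assumptions.

```python
# Hedged sketch of adaptive sampling with an absolute convergence
# threshold; all identifiers are hypothetical, not from the paper.
from typing import Callable, Iterator, Tuple

def adaptive_sampling(
    evaluate: Callable[[int], float],  # trains on n samples, returns accuracy
    schedule: Iterator[int],           # strictly increasing sample sizes
    epsilon: float = 1e-6,             # absolute convergence threshold
) -> Tuple[int, float]:
    """Return (sample_size, accuracy) at which the curve is judged converged."""
    prev_n, prev_acc = None, None
    for n in schedule:
        acc = evaluate(n)
        if prev_acc is not None:
            # Proximity condition (illustrative): slope of the learning
            # curve between the last two scheduled sample sizes.
            slope = abs(acc - prev_acc) / (n - prev_n)
            if slope < epsilon:
                return n, acc  # quality no longer increases appreciably
        prev_n, prev_acc = n, acc
    return prev_n, prev_acc  # schedule exhausted before convergence
```

In the paper's case study, `evaluate` would wrap the training and evaluation of a part-of-speech tagger on a corpus prefix of the given size, and `schedule` could be something like `(1000 * 2**i for i in range(12))`; the actual scheduling and weak-predictor generation strategies are deliberately left open, as the abstract notes the thresholds are independent of them.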

Keywords: Machine learning convergence, Non-active adaptive sampling, POS tagging

Article history: Received 23 May 2021, Revised 5 March 2022, Accepted 12 May 2022, Available online 19 May 2022, Version of Record 24 May 2022.

Article link: https://doi.org/10.1016/j.jcss.2022.05.002